Data Science Products

Data Science Products: Top 3 Things You Must Know

Introduction

Ever wondered why Clive Humby famously coined the phrase 'Data is the new oil'? Well, this blog article tells you exactly what he meant. The latest advancements in data analytics, cloud infrastructure, and the increased emphasis on making data-driven decisions have opened up several avenues for developing Data Science Products. People can build amazing data-based products that generate revenue. In other words, data is the new money-making machine. In this article, we discuss the top 3 things that you must know about data science products.

#1: What is a Data Science Product?

A Data Science Product is a new-era money-making machine that is fueled by data and built using machine learning techniques. It takes data as input and gives out valuable business insights as output.

#2: Examples of Data-based Products

Classic examples of data products include Google search and Amazon product recommendations; both improve as more users engage. But the opportunity for building data-based products extends far beyond the tech giants. These days, companies of all sizes and across almost all sectors are investing in their own data-powered products. Some inspirational examples of data science products developed by non-tech giants are listed below:

HealthWorks

It mimics consumer choice in Medicare Advantage. The product compares and contrasts more than 5,000 variables across plan costs, plan benefits, market factors, regulatory changes, and more. It helps Health Plans identify the top attributes that lead to plan competitiveness, predict enrolments, design better products, and create winning plans.

Cognitive Claims Assistant

Damage assessment in vehicles is an important step for the insurance claims and auto finance industries. Currently, these processes involve manual interventions requiring a long turnaround time. Cognitive Claims Assistant (CCA) by Genpact automates this process. The data product not only reduces the cost and time involved but also accurately estimates the cost of repairs.

#3: How to Build Data-Powered Products?

Steps in making a data science product

Do you want to build a data science product too? Here are the five steps that will help you build a good data science product:

Step 1: Ideation and Design of Data Product

Ideation

The first step of building a data science product is conceptualizing the product. Conceptualization starts with identifying potential opportunities. A good data science product is one that solves a critical business need. An unsolved business need that can be addressed using data is an opportunity for building a data product.

Design

Design the data structure that you will need to solve the business need. This often involves brainstorming on various data inputs and their corresponding valuable outputs that will solve the business need.

Step 2: Get the Raw Data

The second step in building data products is getting the data. If you already own the data, you are covered for this step; all you have to do is move on to the next one. If you don't have the data, you need to generate or gather it.

Step 3: Refine the Data

As they rightly say, data is the new oil, but it is of no use until it is refined, just like oil. Understand the structure of your data. Refine, clean, and pre-process it if it is unstructured. Always remember the golden rule: 'Garbage in, garbage out!' Knowing the data helps you clearly define the inputs and outputs of your data science product.
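
For illustration, here is a minimal data-refinement sketch in Python with pandas; the file name raw_sales.csv and the date and amount columns are hypothetical stand-ins for your own raw data:

```python
import pandas as pd

# Hypothetical raw file and column names, used purely for illustration.
df = pd.read_csv("raw_sales.csv")

# Drop exact duplicate records.
df = df.drop_duplicates()

# Standardize column names (lowercase, no spaces).
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

# Parse dates, coerce bad values to NaT, then drop rows without a valid date.
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df = df.dropna(subset=["date"])

# Coerce the numeric column and fill missing values with the column median.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df["amount"] = df["amount"].fillna(df["amount"].median())

print(df.head())
```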

Step 4: Data-Based Product Development

This is the trickiest part of data science product development and needs strong knowledge of the business process, business needs, statistics, mathematics, and coding. This knowledge forms the backbone of the data product. In the majority of cases, this step involves building a machine learning model using domain knowledge. In some cases, it could also involve simple graphical outputs for exploratory analysis of the data. Whatever the output, the code developed for executing the desired process needs to be tested and validated for real-life use of the data product.
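
To make this step concrete, below is a hedged sketch of training and validating a model with scikit-learn on synthetic data; the algorithm, features, and metrics shown are illustrative choices, not a prescribed recipe:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data standing in for your refined business data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Validate on held-out data and with cross-validation before real-life use.
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("5-fold CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
```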

Step 5: Release!

This is the last step in data product development. In this step, the tried, tested, and validated data science product is deployed to the cloud. The data product's buyers can simply log in from anywhere in the world and use the product.
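
One possible deployment pattern, sketched here under the assumption that the validated model was saved with joblib and that a simple Flask web service fits your cloud setup (file name and endpoint are illustrative only):

```python
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)

# Hypothetical model artifact produced earlier, e.g. joblib.dump(model, ...).
model = joblib.load("data_product_model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"features": [[0.1, 0.2, ...]]}.
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```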

Conclusion

Anybody who owns a treasure trove of data should develop a 'Data Science Product' or a 'Data Product'. Now the question arises: is it possible to build data products without coding knowledge? The answer is, absolutely yes! You can use our data analytics platforms that are specially built for non-coders. All you have to do is arrange your data meaningfully and make a few clicks to build the base model described in Step 4 above. When you deploy the model on the cloud, your money-making machine becomes a reality. If you don't like the idea of doing it all yourself, you always have the option to outsource.

Like any other product, the success of a data product depends on its usability. Half the battle is won with a strong business case. The remaining battle can be won with mathematics, statistics, and computer science. This is exactly where we can contribute. Our aim is to accelerate the data product development process. Let's unite your domain knowledge and data with our data modeling capabilities. Let's build amazing data science products!

covid19 vaccines

Covid19 Vaccines: Safety And Efficacy

Introduction

Today everybody is interested in knowing about the safety and efficacy of the Covid19 vaccines. People want to know more about the several vaccine options and which vaccine will best protect them against Covid19. To answer these and many other questions, this article describes the basic science and working principle behind the Covid19 vaccines.

What is a Vaccine?

A vaccine is a biological product that stimulates an immune response to a specific disease upon inoculation. The process by which a person becomes immune to a disease through vaccination (inoculation) is referred to as immunization.

Vaccines: Are They Really Effective?

Data speaks louder than words! According to the World Health Organisation (WHO), vaccines prevent more than 20 life-threatening diseases and avert 2-3 million deaths every year from diphtheria, pertussis, tetanus, influenza, and measles.

Poliomyelitis, one of the most feared epidemics ever encountered by humans, has been eliminated from most countries. WHO declared India a polio-free country in March 2014. This achievement is believed to have been spurred by the extensive pulse polio (vaccination) campaign.

How Efficacious Are COVID-19 Vaccines?

The efficacy of a vaccine is determined in a double-blind randomized controlled trial. It involves randomly assigning participant volunteers to either a treatment or a control (placebo) group; neither the participant nor the experimenter knows who is receiving which treatment. Both groups live their normal lives and are monitored for COVID-19 symptoms over a specified period of time. The efficacy of a vaccine is calculated using the following formula.

Efficacy = [(p/N) − (v/n)] / (p/N) = 1 − (v/n) / (p/N)

where:
v – cases among the vaccinated
p – cases among the placebo group
N – number of placebo participants
n – number of vaccinated participants

For example, 43,000 volunteers were enrolled in the Pfizer/BioNTech vaccine trials and distributed equally between the treatment and placebo groups; 170 in the placebo group and only 8 in the treatment group developed COVID-19 over the next several months. This corresponds to a vaccine efficacy of 95%.
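
As a quick sanity check of the arithmetic, here is a small Python sketch of the efficacy formula applied to the equally sized Pfizer/BioNTech trial arms described above (the per-arm count of 21,500 is simply half of the 43,000 volunteers):

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccinated) / (attack rate in placebo)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# ~21,500 volunteers per arm, 8 cases among vaccinated vs 170 among placebo.
efficacy = vaccine_efficacy(8, 21500, 170, 21500)
print(f"Estimated efficacy: {efficacy:.0%}")  # ~95%
```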

So, what does 95% efficacy indicate? It doesn't mean that only 5% of vaccinated people would develop COVID-19; rather, vaccinated people are 95% less likely to develop COVID-19 upon exposure.

Innovator | Brand Name | Efficacy
Pfizer/BioNTech | Comirnaty | 95%
Moderna | mRNA-1273 | 94%
RDIF | Sputnik V | 92%
Novavax | Covovax | 89%
Bharat Biotech | Covaxin | 81%
Oxford/AstraZeneca | Covishield | 67%
Johnson & Johnson | Janssen COVID-19 vaccine | 66%
Sinopharm | Sinovac | 51%

Though the efficacy of all the vaccines is calculated using the above formula, the trials were performed under different circumstances. Scientists argue that the Pfizer/BioNTech and Moderna vaccine trials were performed during a period when the number of daily cases was significantly low. In contrast, the Johnson & Johnson vaccine trials were conducted when the number of daily cases was at its highest. The probability of participant volunteers contracting the virus was therefore higher in the Johnson & Johnson trials. Moreover, the virus strain (B.1.1.7) prevalent during the Pfizer/BioNTech and Moderna trials is known to be less virulent than the variants (B.1.351 and P.2) present during the Johnson & Johnson trials. A volunteer's opportunity to get exposed to COVID-19 is also highly variable.

There's a myth around the COVID-19 vaccines that vaccination averts all possibility of contracting the virus. No! That's not right! There is a possibility of infection even after vaccination, but the vaccine trains our immune system how to deal with the virus. Consequently, the patient might experience mild to moderate symptoms, but the chances of hospitalisation and death are significantly reduced.

Is there any other way to determine which vaccine is the most efficacious? Yes! For that, all the vaccines would have to be studied together in a single double-blind randomised clinical trial. But that's not the need of the hour.

Are Covid19 Vaccines Safe Enough?

Most COVID-19 vaccines are very safe. It is perfectly normal to have mild side effects after vaccination. The common side effects include pain, redness, and swelling. Other side effects are tiredness, headache, muscle pain, nausea, fever, and chills. These side effects are short-term and are attributable to the mechanism of action of the vaccines.

However, it is important to mention that four cases of serious adverse reactions (blood clotting with a low platelet count) were reported in Norway after vaccination with the AstraZeneca vaccine. Two of the patients died due to haemorrhage and the other two were hospitalised. Similar adverse reactions were reported in Denmark, Italy, and the United Kingdom. However, the European Medicines Agency's (EMA) safety committee concluded that unusual blood clots with a low platelet count are a very rare side effect of the vaccine. Only about 1 in 100,000 vaccinated people develops such a clotting reaction, which is many folds lower than the risk of death related to COVID-19.

Will Covid19 Vaccines Demonstrate Same Effectiveness Despite Mutations in the Virus?

As data continues to be collected on the new variants of COVID-19, most of the vaccines are reported to provide protection against the virus because they elicit a broad immune response involving a range of antibodies and cells. Researchers note that most of the vaccines are designed around the spike protein, which is less likely to mutate. In other words, as long as the spike protein is not mutating significantly, the vaccines should continue providing protection against COVID-19.

Conclusion

Herd immunity requires a large enough population to be vaccinated to prevent widespread transmission of COVID-19. The rate at which the population is vaccinated is going to play a crucial role in ending deaths and emergency hospitalizations. It is important to reiterate that vaccines are very safe and that mild, short-term side effects are common.

Curious to know more about Let’s Excel Analytics Solutions LLP?

Predictive Analytics in Healthcare

Introduction to Predictive Analytics in Health Care

Predictive analytics has had a huge impact on the healthcare system and finds a great many applications driving innovations related to patient care. The purpose of this blog is to apprise you of the wonders predictive analytics is doing in patient care.

“Predictive analytics is a branch of Data Science that deals with the prediction of future outcomes. It is based on the analysis of past events to predict future outcomes.”

Predictions have fascinated mankind since time immemorial. Nostradamus set forth prophecies about catastrophes, disease, health, and well-being. Who would have known that this art of foretelling could transform into a science, Predictive Analytics!

Advantages of the applications of predictive analytics in healthcare:

  • Predict curable diseases at the right time.
  • Predict pandemic and epidemic outbreaks.
  • Mitigate the risks of clinical decision making.
  • Reduce the cost of medical treatments.
  • Improve the quality of patient life.

“Patient care has transitioned from relying on the extraordinary ability of a physician to diagnose and treat diseases to the use of sophisticated, state-of-the-art technology to provide innovative patient care.”

For the purpose of this discussion, the applications of predictive analytics in healthcare have been divided into three aspects of patient care.

  1. Diagnosis
  2. Prognosis
  3. Treatment

Use of predictive analytics in medical diagnosis

Early detection of cancer

Many Machine Learning algorithms are being used by clinicians for the screening and early detection of precancerous lesions. QuantX (Qlarity Imaging) is the first USFDA-approved machine learning system for breast cancer diagnosis. This computer-aided diagnosis (CAD) software assists radiologists in the assessment and characterization of potential breast anomalies using Magnetic Resonance Imaging (MRI) data. Another image-processing ML application, developed by the National Cancer Institute (NCI), uses digital images of the cervix to identify potentially cancerous changes that require immediate medical attention.

Predisposition to certain diseases

Predictive analytics has a huge potential to determine the occurrence and predisposition of genetic and other diseases. This domain leverages the data collected from the human genome project to study the effect of genes linked to certain disorders. This is known as pleiotropic gene information. Many such models have been developed to determine the risk of manifesting diseases like osteoporosis, diabetes, hypertension, etc., in the later stages of life.

Prediction of disease outbreaks

The prediction of disease outbreaks that could eventually turn epidemic and pandemic is an indispensable tool for emergency preparedness and disaster management. Many lives could be saved if the outbreak of such diseases is known to us in the first place. However, the efforts of researchers modeling the spread of deadly diseases like Covid19, Zika, and Ebola viruses have yet to bear the fruit of success. The most probable reason could be the complexities in the data collection procedures and the highly dynamic nature of the pathogens like viruses.

Use of predictive analytics in disease prognosis.

Deterioration of patients in ICU

Predictive algorithms developed from continuous monitoring of a patient's vital signs are used to predict the probability of patient deterioration and the need for immediate intervention within the next hour or so. It is well established that early intervention is highly successful in preventing patient deaths. These predictive algorithms are also used in the remote monitoring of patients in intensive care units (ICU). Remote monitoring of patients, also known as Tele-ICU, is highly effective in aiding intensivists and nurses during situations like Covid19, when the healthcare system is pushed to the limit.
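
As an illustration only, and not any vendor's actual algorithm, a deterioration-risk model of this kind could be sketched as a logistic regression on synthetic vital-sign features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic vital signs: heart rate, respiratory rate, SpO2, mean arterial pressure.
X = np.column_stack([
    rng.normal(85, 15, n),   # heart rate (bpm)
    rng.normal(18, 4, n),    # respiratory rate (breaths/min)
    rng.normal(96, 3, n),    # SpO2 (%)
    rng.normal(85, 12, n),   # mean arterial pressure (mmHg)
])

# Toy label: "deterioration within the next hour" from a noisy rule (illustration only).
risk = 0.04 * (X[:, 0] - 85) + 0.3 * (X[:, 1] - 18) - 0.5 * (X[:, 2] - 96) - 0.05 * (X[:, 3] - 85)
y = (risk + rng.normal(0, 2, n) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```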

Reducing hospital stays

Prolonged hospital stays and readmissions are very expensive for patients. Analysts constantly look at patient data to monitor the patient's response to treatment and avert any unwarranted hospital stay. The effect of future outcomes on patient health can also be determined to customize patient-specific treatment modalities that prevent readmissions.

Risk scoring for chronic diseases

Predictive analytical applications have been designed that can identify patients who are at high risk of developing chronic conditions in the early stages of disease progression. Early detection of disease progression allows better management of the condition. In the majority of cases, the prognosis can be controlled to a great extent, with a significant effect on the patient's quality of life.

Predictive analytics in treatment of diseases

Virtual hospital settings

Philips developed a concept technology of virtual hospital settings for the predictive care of high-risk patients at their homes. The analytics employs data from the medical records of thousands of patients and the medical history of a particular (senior) patient to build predictive models that can identify patients who are at risk of requiring emergency treatment in the next month. Various devices have been developed that provide alerts for potential emergencies; these are known as Automatic Fall Detection (AFD) devices. An AFD device continuously collects data on the patient's movements in all directions (using accelerometer sensors) and uses the data to pick up the subtle differences between normal gait and potential fall situations. The feature has gained so much popularity that Apple added fall detection to the Apple Watch Series 4.

Digital twins

Another marvel of predictive analytics for patient care is digital twin technology. In this technology, predictive analytics, IoT, and cloud computing tools are used to develop a virtual representation of the human body. The virtual representation mimics the actual biochemical processes in the human body by constantly collecting data from millions of such patients. The data is modeled to project the possible cause of the patient’s symptoms and suggest the most viable treatment modality specific to the patient’s condition. The treatment modality recommended by the twin can be assessed virtually before implementation on the patient and possible complications can be known and averted in the first place.

Conclusion

The adoption of predictive analytics has ushered in personalized and patient-centric transformations in the healthcare industry. However, its scope is not limited to patients alone; it has huge potential to overhaul other areas of the healthcare system like administration, supply chain, engineering, public relations, and so on.

Interested in building predictive analytical capabilities in your organization?

DataSets

Searching DataSets for Data Analytics Projects and Self Directed Learning

Introduction

Technology has been evolving expeditiously over the past decade. These advancements have set off a trend for learning with technology. To satisfy their learning needs, people are embracing self-directed learning. It is important to mention that as the world prepares for the Fourth Industrial Revolution (I4.0), the workforce has to keep up with the advancements in technology. At the same time, there has been quite a buzz around Machine Learning and Artificial Intelligence, which form the heart and soul of I4.0. In other words, learning Machine Learning is the need of the hour.

Now that it is imperative to learn Machine Learning, there are three success mantras for mastering it: PRACTICE, PRACTICE, and PRACTICE. But the basic question that comes to mind is: what to practice on? You need a true dataset to work on, as if you were dealing with a real ML problem. In this blog, we discuss some of the most popular data repositories for extracting sample datasets to master Machine Learning skills.

Data, DataSet, and Databases

Before we begin, it's important to clear the air with some basic definitions related to datasets.

What is data?

  • Data is a collection of information that is based on certain facts.

What is a dataset?

  • A dataset is a structured collection of data.

What is a database?

  • A database is an organized collection of multiple datasets.

Data can be collected from various sources such as experiments, surveys, polls, interviews, human observations, etc. It can also be generated by machines and directly archived into databases.

DataSets For Machine Learning Projects


The choice of data is a very crucial step in the success of a Machine Learning program. The source of the datasets is equally important, as it determines the reliability and trueness of the collected data. Some of the most popular data repositories for acquiring Machine Learning datasets are discussed below.

KAGGLE  

This platform is owned by Google LLC and is a repository of huge data sets and code that is published by its users, the Kaggle community. Kaggle also allows its users to build models with the Kaggle datasets. The users can also discuss the problems faced in analyzing the data with its user community.

Kaggle also provides a platform for various open-source data Science courses and programs. It is a comprehensive online community of Data Science professionals where you can find solutions to all your data analytics problems.

UCI MACHINE LEARNING REPOSITORY

The UCI Machine Learning Repository is an open-source repository of Machine Learning databases, domain theories, and data generators. It was created by David Aha, then a graduate student at the University of California, Irvine (UCI), around 1987. Since then, the Center for Machine Learning and Intelligent Systems at UCI has been overseeing the archival of the repository. It has been widely used for empirical and methodological research on Machine Learning algorithms.

QUANDL

Quandl is a closed-source repository of financial, economic, and alternative datasets used by analysts worldwide to inform their financial decisions. It is used by the world's topmost hedge funds, asset managers, and investment banks.

Due to its premium, closed-source nature, it cannot be used just for practicing Machine Learning algorithms. But given its specialization in financial datasets, it is important to include Quandl in this list. Quandl is owned by NASDAQ, the American stock exchange based in New York City.

WHO

The World Health Organisation (WHO) is a specialized agency of the United Nations headquartered in Geneva, Switzerland. It is responsible for monitoring international health and continually collects health-related data from across the world. WHO's data repository is named the Global Health Observatory (GHO). The GHO data repository collects and archives health-related statistical data from its 194 member countries.

If you are looking to develop Machine Learning algorithms for health-related problems, GHO is one of the best sources of data. It is a repository of a wide variety of information, ranging from particular diseases, epidemics, and pandemics to world health programs and policies.

GOOGLE DATASET SEARCH

Google Dataset Search is a search engine for datasets powered by Google. It uses a simple keyword search to find datasets hosted in different repositories across the web. It points its users to around 25 million publicly available datasets. Most data in this repository is government data, alongside a wide variety of other datasets.

AMAZON WEB SERVICES (AWS)

Amazon Web Services is known as the world's largest cloud services provider. AWS has a registry of datasets that can be used to search for and host a wide variety of resources for Machine Learning. The registry is cloud-based, allowing users to add and retrieve all forms of data irrespective of scale. AWS also enables data visualization, data processing, and real-time analytics for making well-informed, data-driven decisions.

Conclusion

Human resources are prepping for Workforce 4.0 by constantly acquiring new skills, and Machine Learning is one of the most indispensable skills for tomorrow's workforce. In today's world of digital revolution, information is available at our fingertips. Datasets for Machine Learning are available as open source and can be utilized to build algorithms for making informed decisions.

Let’s Excel Analytics Solutions LLP can support your organizational needs to develop digitalized tools for reinventing the business.


Curious to know more?

Digital Twin

Digital Twin: Introduction, Its Working, and Applications

What is a Digital Twin?

A digital twin is a virtual reflection of a physical object, generally driven by the marvels of:

  • Internet of Things (IoT),
  • Cloud, and
  • Advanced Analytics.

A digital twin constantly collects real-time data and simulates it in a virtual replica of the physical object. This virtual replica can then be used to provide solutions to the problems experienced by the physical object.

The term 'Digital Twin' was coined by Michael Grieves in 2002. However, the concept of the digital twin is as old as Apollo 13 (the 1970s). Though Apollo 13 was a failed moon mission, it hinted at the inception of virtualization of the physical world. On its way, around 330,000 km from Earth, Mission Control in Houston received an SOS: "Houston, we have a problem". The oxygen levels in the spacecraft had started declining fast. A dramatic rescue mission was started to bring the onboard astronauts back to Earth. The key to this mission was that NASA had a physical replica of Apollo 13 on Earth. The engineers performed a series of troubleshooting measures on the replica and came up with the best possible solution for bringing back the quickly declining Apollo 13. All three crew members onboard were rescued successfully. This mission revolutionized the future of space exploration and is popularly known as a successful failure.

“Houston, we have a problem”.

Unlike with Apollo 13, all the replicas of current NASA programs are digitally and virtually monitored. NASA has been continuously using real digital twin technology to solve the day-to-day problems encountered in the operation and maintenance of its space programs, without actually being physically present.

Another milestone in the history of digital twins was the launch of Predix by GE Digital (a subsidiary of General Electric). Predix is an Internet of Things (IoT) platform that combines secure cloud computing and data analytics and is used for improving the operational efficiency of machines. In 2015, Colin J. Parris, Vice President of the GE Global Research Center, demonstrated to the world:

  • how a computer program could predictively diagnose malfunctions in the operation of a steam turbine, and
  • how it could even perform maintenance activities remotely.

GE has been continuously monitoring hundreds of such turbines using their digital twins for over a decade now.

Working of Digital Twin

  • The physical object, also known as an asset, is designed to have many, sometimes hundreds, of sensors. These sensors capture real-time data (about almost everything) and send it across to the digital twin.
  • The digital twin analyses this information and combines it with data from hundreds of other similar assets, using:
    • the Internet of Things (IoT),
    • cloud connectivity, and
    • predictive data analytics.
  • Additionally, the information shared with the digital twin is simulated against the various design features of the asset.
  • The simulation is used to answer two important questions, viz.,
    • What could go wrong?
    • What could be done about it?
  • This knowledge is used to build a learning platform that makes the digital twin smarter every time additional information is added.
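
The data flow above can be sketched conceptually in a few lines of Python; the DigitalTwin class, the single temperature sensor, and the thresholds below are hypothetical simplifications, not a real twin platform:

```python
import random
import statistics

class DigitalTwin:
    """Toy twin of a single temperature sensor on an asset (illustration only)."""

    def __init__(self, window=50, z_threshold=3.0):
        self.readings = []
        self.window = window
        self.z_threshold = z_threshold

    def ingest(self, value):
        # Keep a sliding window of recent readings as the learned baseline.
        self.readings.append(value)
        self.readings = self.readings[-self.window:]
        return self.assess(value)

    def assess(self, value):
        if len(self.readings) < 10:
            return "learning baseline"
        mean = statistics.mean(self.readings)
        stdev = statistics.stdev(self.readings) or 1e-9
        z = abs(value - mean) / stdev
        # "What could go wrong?" -> a reading far from the baseline.
        # "What could be done about it?" -> recommend an action.
        return "schedule maintenance" if z > self.z_threshold else "normal"

twin = DigitalTwin()
for t in range(200):
    reading = random.gauss(70, 1) + (15 if t == 150 else 0)  # inject a fault at t=150
    if twin.ingest(reading) == "schedule maintenance":
        print(f"t={t}, reading={reading:.1f}: schedule maintenance")
```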

Applications of Digital Twin

Use of digital twins in patient care

Philips is pioneering the concept of a virtual representation of a patient's health status, i.e., each patient would have a digital twin that enables the right type of treatment in the right way and at the right time. For example, if a patient presents with a particular symptom, their digital twin uses medical diagnosis data in combination with the patient's medical history and a variety of other available medical information to build a digital model that recommends the patient-specific treatment modality with the best possible outcome. The digital twin also enables simulation of the treatment modality before the procedure is implemented on the patient in a real-case scenario. During the performance of the procedure, it ensures fidelity of the procedure and can even predict unforeseen complications so that they can be averted in the first place. Moreover, all this information is stored in the cloud and can be retrieved anywhere at any time.

Use of digital twins in manufacturing 

The digital twin of a manufacturing process uses IoT sensors that collect real-time process data continuously. The IoT sensors enable uninterrupted monitoring of the process, which increases the overall performance of the manufacturing process. Continuous monitoring also allows anticipation of maintenance needs through the use of advanced analytics. This can reduce process outages and downtimes, saving millions of dollars. The amalgamation of advanced analytics and IoT can be used to manage the performance of the manufacturing process, which, in turn, improves the quality of the final product. It is important to note that the digital twin of a manufacturing process is not a single application but hundreds of interconnected applications. The communication between all these applications puts the process into a state of control.

The virtualization process is also taking over the most vital component of the manufacturing industry, i.e., supply chain management. The digital twin of the supply chain can automate organizational processes. The twin can automate the purchasing and tracking of assets and consumables based on anticipated usage. If there is a shortage of raw material, the twin can assess the possible impacts on operations and also offer best-case scenarios and solutions. This prepares an organization for overcoming logistic challenges and hence improves its overall productivity.

Future of Digital Twin Technology

Digital twin technology is rapidly expanding its applicability in almost every industry and, in fact, almost everywhere. Due to the adoption of the fourth industrial revolution, Industry 4.0, the market for digital twins is expected to grow enormously. The global market for digital twins was valued at $3.1 billion in 2020 and is projected to grow to $48.2 billion by 2026 at a Compound Annual Growth Rate (CAGR) of 58%.

The outbreak of COVID-19 has accelerated the implementation of digital twins in business models, particularly in the biotechnology and pharmaceutical industries. The industry is gearing up to upgrade existing infrastructure and adopt digitalized technologies to avoid crippling losses due to frequent lockdowns. Governments are also very keen on adopting the technology, as can be seen in the design of smart cities across the world. Singapore's smart city initiative is the best-fit example of the application of digital twin technology. This model combines different technologies to develop a digital version of the city's resources, processes, and procedures. The digital version enables supervision of the city using a simple computer program.

Conclusion

The new normal of the pandemic has redirected and reinforced the adoption of digital twin technologies into every aspect of our lives. Digital twin technology is going to be a game-changer in fields like continuous manufacturing. There are innumerable advantages to adopting the technology, such as cost leadership, environmental sustainability, economic stability, energy efficiency, etc. It is going to change the way our businesses have ever been managed. Let's Excel Analytics Solutions LLP can support your organizational needs to develop digitalized tools for reinventing the business.


Curious to know more?


Chemometrics and How to Use It?

Introduction

“Chemometrics” is a combination of two words, “chemo” and “metrics”, and signifies the application of computational tools to the Chemical Sciences. The term was coined by the Swedish scientist Svante Wold in 1972. Later, in 1974, Svante Wold and Bruce Kowalski founded the International Chemometrics Society (ICS). ICS describes chemometrics as the chemical discipline that uses mathematical and statistical models to
a) design or select optimal measurement procedures and experiments, and
b) provide maximum chemical information by analyzing chemical data.

How does Chemometrics help design optimal experiments?

Classical chemistry depends on the conventional one-factor-at-a-time (OFAT) approach for building an understanding of process chemistry, process performance, and product characterization. However, these conventional techniques suffer from many drawbacks, such as:

  • OFAT studies are time-consuming and need a greater number of experimental runs,
  • OFAT does not give any information about potential interactions between two or more factors, and
  • OFAT studies may or may not give the optimal settings for the process or the product attributes.

Chemometrics, in turn, employs multivariate mathematical and statistical tools in combination with computational techniques to investigate the effect of multiple factors on the optimality of the process and product attributes. The multivariate data is modeled into a mathematical equation that can predict the best settings for the process and the effect of excursions of the process parameters on process performance and product quality.

The outcome of a multivariate investigation allows identification of the multidimensional design space within which variations in process parameters do not impact process performance and product quality attributes. Moreover, multivariate strategies pack multiple process insights into a single multivariate design of experiments. The adoption of the multivariate design of experiments offers multiple advantages over the conventional OFAT approach:

  • It reduces product development timelines significantly,
  • It significantly reduces product development costs in a highly competitive market, and
  • It maximizes the total information obtained from the experiment.
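
To make the contrast with OFAT concrete, here is a minimal sketch of a two-level full-factorial design, the simplest multivariate design of experiments; the factors and ranges below are hypothetical:

```python
from itertools import product

# Hypothetical process factors and their low/high settings (illustration only).
factors = {
    "temperature_C": (30, 50),
    "pH": (6.0, 7.5),
    "stirring_rpm": (200, 400),
}

# Full two-level factorial: 2^3 = 8 runs that also expose factor interactions,
# whereas OFAT varies one factor at a time and misses interactions.
design = list(product(*factors.values()))

for run, settings in enumerate(design, start=1):
    formatted = ", ".join(f"{name}={value}" for name, value in zip(factors, settings))
    print(f"Run {run}: {formatted}")
```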

How does Chemometrics help derive maximum information from the chemical data?

The multivariate strategy for the analysis of chemical data starts with the pretreatment of the data, also known as data preprocessing. It involves approaches where:

  • The data is scaled and coded,
  • Cleaned for outliers,
  • Checked for errors and missing values, and
  • Transformed, if need be, into a format that is explicitly comprehensible by the statistical and mathematical algorithms.

After the preprocessing of the data, chemometric tools look for patterns and informative trends in the data. This is referred to as pattern recognition. Pattern recognition uses machine learning algorithms to identify trends and patterns in the data. These machine learning algorithms, in turn, employ historical data stored in data warehouses to predict the possible patterns in a new set of data. Pattern recognition ML tools use either supervised or unsupervised learning algorithms. The unsupervised algorithms include Hierarchical Cluster Analysis (HCA) and Principal Components Analysis (PCA), whereas the supervised algorithms include K-Nearest Neighbours (KNN).

What are the Different Tools and Techniques used in Chemometrics?

With the advancement of time, chemometrics has added multiple feathers to its cap rather than being a single tool for application in the Chemical Sciences. A wide variety of disciplines that contributed to the advancement of the field of Chemometrics are shown in the figure below. It keeps adding new techniques to expand its applicability in the research and development of the chemical sciences.

  • Multivariate Statistics & Pattern Recognition in the Chemometrics

Multivariate statistical analysis refers to the concurrent analysis of multiple factors to derive the totality of the information from the data. The information derived may be the effect of individual factors, the interactions between two or more factors, and the quadratic terms of the factors. As multivariate data analysis involves estimation of almost all possible effects in the data, these techniques have very high precision and help make highly predictable conclusions. The multivariate statistical tools and techniques find plenty of applications in the following industries:

  • Pharma and Life Sciences
  • Food and Beverages
  • Agriculture
  • Chemical
  • Earth & Space
  • Business Intelligence

Some of the most popular and commonly used multivariate modelling approaches are described briefly below.

  • Principal Components Analysis

Data generated in chemometrics, particularly in spectroscopic analysis, is enormous. Such datasets are highly correlated and difficult to model. For that matter, Principal Components Analysis (PCA) creates new uncorrelated variables known as principal components. PCA is a dimensionality reduction technique that enhances the interpretability of large datasets by transforming them into a smaller set of variables without losing much of the information. Let's Excel Analytics Solutions LLP offers a simple yet highly capable web-based platform for PCA, branded as MagicPCA.
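
Under the hood, a PCA of this kind can be sketched in a few lines with scikit-learn; this is a generic illustration on synthetic correlated data, not the MagicPCA implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic "spectra": 100 samples x 50 highly correlated variables.
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 50)) + rng.normal(scale=0.1, size=(100, 50))

# Scale, then project onto a few uncorrelated principal components.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Scores shape:", scores.shape)  # (100, 2): compact, uncorrelated representation
```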

  • Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) is another multivariate technique that relies on dimensionality reduction. However, in LDA the dependent variable is categorical, while the independent variables can be interval-scaled. LDA focuses on establishing a function that can distinguish between the different categories of the dependent variable. This helps identify the sources of maximum variability in the data. Our experts at Let's Excel Analytics Solutions LLP have developed an application, namely niceLDA, that can solve your LDA problems.
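
A comparable generic sketch of linear discriminant analysis with scikit-learn follows (synthetic data, not the niceLDA application):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic data with three categories (e.g., three product grades).
X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)
X_train_lda = lda.fit_transform(X_train, y_train)  # supervised dimensionality reduction

print("Classification accuracy:", lda.score(X_test, y_test))
print("Reduced shape:", X_train_lda.shape)  # (n_samples, 2)
```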

  • Partial Least Squares

Partial Least Squares (PLS) is a multivariate statistical tool that bears some resemblance to Principal Components Analysis. It reduces the number of variables to a smaller set of uncorrelated variables and subsequently performs linear regression on them. However, unlike ordinary linear regression, PLS can fit multiple responses in a single model. Our programmers at Let's Excel Analytics Solutions LLP have developed a user-friendly web-based application for partial least squares regression, EasyPLS.
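
And a generic partial least squares sketch with scikit-learn (synthetic data and an arbitrary number of components, not the EasyPLS application); note how a single PLSRegression model fits two responses at once:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# Synthetic predictors (e.g., 30 correlated measurements) and two responses.
X = rng.normal(size=(120, 30))
Y = np.column_stack([
    X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=120),
    X[:, 5:10].sum(axis=1) + rng.normal(scale=0.2, size=120),
])

pls = PLSRegression(n_components=5)
pls.fit(X, Y)

print("R^2 on training data:", pls.score(X, Y))
print("Predicted responses shape:", pls.predict(X).shape)  # (120, 2)
```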

Application of Chemometrics in Analytical Chemistry

Chemometrics finds application throughout the entire lifecycle of the Analytical Sciences, right from method development and validation, development of sampling procedures, and exploratory data analysis to model building and predictive analysis. The analytical data generated is multivariate in nature and depends on multivariate data analysis (MVDA) for exploratory analysis and predictive modeling. The three main areas of the Analytical Sciences where Chemometrics has demonstrated its advantages over conventional techniques are:

  1. Grouping or cluster analysis refers to a group of analyses where a data set is divided into various clusters in such a way that each cluster has a unique and peculiar property that differs from the other clusters. A widely known example of cluster analysis is in the flow cytometric analysis of cell viability, where cells are clustered based on apoptotic markers. Principal Component Analysis can be used as a powerful tool for understanding the grouping patterns.
  2. Classification analysis is defined as the systematic categorization of chemical compounds based on known physicochemical properties. This allows for the exploration of alternatives to a known chemical compound with similar physicochemical properties. For example, in the development of an HPLC method for polar and aromatic compounds, data mining for the corresponding solvents can be done by looking into polar and aromatic classes of solvents. This can be done by building SIMCA models on top of Principal Component Analysis.
  3. Calibration of analytical methods: chemometrics-assisted calibration of analytical methods employs multivariate calibration models where multiple, sometimes hundreds of, analytes are calibrated at the same time. These multivariate calibration models have many advantages over conventional univariate calibration models. The major advantages include:
    1. significant reduction of noise,
    2. handling the non-selectivity of analytical methods,
    3. dealing with interferents, and
    4. detecting and excluding outliers in the first place.

Principal Components Analysis and Partial Least Squares are the chemometric tools most commonly used for developing multivariate calibration models in analytical methods for pharmaceuticals, foods, environmental monitoring, and forensic sciences. Chemometric tools have widely transformed the discipline of the Analytical Sciences by building highly reliable and predictive calibration models, providing tools that assist in their quantitative validation, and contributing to their successful application in highly sensitive chemical analyses.

Application of Chemometrics in Studying QSAR in Medicinal Chemistry

QSAR stands for "quantitative structure-activity relationship" and refers to the application of a wide variety of computational tools and techniques used to determine the quantitative relationship between the chemical structure of a molecule and its biological activities. It is based on the principle that each chemical moiety is responsible for a certain degree of biological activity in a molecule and influences the activity of other moieties in the same molecule. In other words, similarities in the structure of two chemical molecules may correspond to similarities in their biological activities. This forms the basis for predicting the biological activities of new drug molecules in medicinal chemistry.

For QSAR modeling, the features of a chemical molecule that can potentially affect its biological activities are referred to as molecular descriptors. These molecular descriptors are classified into five major categories: physicochemical, constitutional, geometric, topological, and quantum chemical descriptors. The biological activities of interest in QSAR correspond to the pharmacokinetic, pharmacodynamic, and toxicological properties of the molecule. Each of the molecular descriptors is referred to as a predictor and the corresponding biological activity as the response. The predictors are then modeled into a mathematical equation using multivariate statistical tools. There are two widely accepted classes of statistical models used for predicting the QSAR of a new molecule: regression and classification models. The regression models used are multiple linear regression (MLR), principal components regression (PCR), and Partial Least Squares regression (PLS). Let's Excel Analytics Solutions LLP has developed user-friendly interfaces for performing all these operations.
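
As a toy illustration of the regression side of QSAR, hypothetical molecular descriptors can be regressed against a hypothetical activity with multiple linear regression; real QSAR work uses curated descriptors and rigorously validated models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_molecules = 200

# Hypothetical descriptors: logP, molecular weight, H-bond donors, a topological index.
descriptors = np.column_stack([
    rng.normal(2.5, 1.0, n_molecules),   # logP
    rng.normal(350, 60, n_molecules),    # molecular weight
    rng.integers(0, 5, n_molecules),     # H-bond donors
    rng.normal(10, 3, n_molecules),      # topological index
])

# Hypothetical activity (e.g., pIC50) as a noisy linear function of the descriptors.
activity = (0.8 * descriptors[:, 0] - 0.002 * descriptors[:, 1]
            - 0.3 * descriptors[:, 2] + 0.1 * descriptors[:, 3]
            + rng.normal(0, 0.3, n_molecules))

X_train, X_test, y_train, y_test = train_test_split(descriptors, activity, random_state=0)
mlr = LinearRegression().fit(X_train, y_train)
print("Test R^2:", mlr.score(X_test, y_test))
```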

QSAR also has extended its approaches to other fields like chromatography (Quantitative Structure and Chromatography Relationship, QSCR), toxicology (Quantitative Structure and Toxicity Relationship, QSTR), biodegradability (Quantitative Structure and Biodegradability Relationship, QSBR), electrochemistry (Quantitative Structure and Electrochemistry Relationship, QSER) and so on.

Conclusion

Chemometrics has changed the way chemical processes are designed and developed. The information obtained from chemical data has maximized the degree to which processes can be optimized. It has also contributed significantly to the development of highly sensitive and accurate analytical methods by simplifying the complex data generated during the development, calibration, and validation of analytical methods. In general, chemometrics is an ever-expanding domain that is constantly diversifying its applications in a wide variety of fields.

Let’s Excel Analytics Solutions LLP has a proven track record of developing highly reliable chemometric applications that can help you make better business decisions. If you are dealing with a complex problem and looking for the right solution, schedule a free consultation now!

ISPE Pharma 4.0

Pharma 4.0: ISPE’s Vision for Operating Model

Introduction


ISPE stands for the International Society for Pharmaceutical Engineering, founded by a group of experts to discuss new challenges faced in pharmaceutical manufacturing. ISPE is a non-profit organization that provides technical and non-technical leadership for managing the life cycle of pharmaceutical products. In 2017, a Special Interest Group (SIG) was appointed to create a roadmap to facilitate "Industry 4.0" for pharmaceutical manufacturing. The prime objective of the SIG was to reinvent Industry 4.0 for adoption and leverage in the Pharmaceutical Industry. ISPE "Pharma 4.0" is based largely on the same concepts and ideologies as Industry 4.0; additionally, it has regulatory aspects based on ICH guidelines, specifically ICH Q8 and Q10.

History of Industry X.0

Industry 1.0: The First Industrial Revolution began in the 18th century with the utilization of machines to produce goods and the use of steam power, particularly in the weaving industry. The mechanization of industries improved human productivity many folds.

Industry 2.0: The Second Industrial Revolution started in the 19th century with the discovery of electricity. During this time, the concept of the production and assembly line was introduced by Henry Ford. The production line eased manufacturing and increased the efficiency of producing automobiles, in turn reducing production costs.

Industry 3.0: The Third Industrial Revolution started in the 20th century with the introduction of computers and their utilization to program industrial processes under human supervision.

Industry 4.0: The Fourth Industrial Revolution is currently ongoing. This revolution has enabled the complete automation of industrial processes by making use of advanced computers and their integration into networked systems, which allow internetworked communication between production systems, leading to the emergence of smart factories.

Smart Factories: The various components involved in smart factories communicate with each other and mark the inception of total automation. These components are known as Cyber-Physical Systems; they employ advanced control systems operated using software capable of internet connectivity (Internet of Things and Internet of Systems), cloud computing, and cognitive computing. Efficient communication and the availability of information have enabled the digitization of manufacturing systems.

The Germans were the first to adopt the Fourth Industrial Revolution, naming it I4.0 when they initiated projects that promoted the digitization of manufacturing systems.

Barriers of Industry 4.0 into Pharmaceutical Industry

It is fair to say that the pharmaceutical manufacturing industry has not kept pace with advancing technologies. This is attributable to the stringent regulatory requirements that have slowed down the implementation process. For regulatory agencies, compliance with existing standards matters more than the adoption of new technologies. It is believed that the pharmaceutical industry is highly regulated and cannot be left to machines. But the industry has started to realize the benefits of advanced technologies that can enhance productivity and improve quality at the same time. This hints at the inception of automation in achieving regulatory compliance in pharmaceutical manufacturing.

Evolution of Industry 4.0 to Pharma 4.0

  • Very often, pharmaceutical organizations experience quality shortcomings that eventually lead to 483 observations and warning letters from regulatory agencies. Every year, approximately 4,500 drugs are recalled in the USA alone. These recalls cost organizations a great deal.
  • Currently, the pharmaceutical industry is trying to adopt new strategies that can mitigate quality-related incidents. Lean Six Sigma tools are employed to improve product quality in pharmaceutical manufacturing.
  • In 2004, the US FDA published a guidance document entitled "Quality Systems Approach to Pharmaceutical Current Good Manufacturing Practices Regulations" that urged manufacturers to implement modern quality systems and risk-based approaches to meet the expectations of the regulatory agencies.
  • In 2009, the ICH Q8 guideline was revised to incorporate the principles of "Quality by Design" (QbD); it stated that quality cannot just be monitored but should be built into the product. Despite these measures, quality violations of pharmaceutical products continue unabated.
  • The best solution to these problems is the digitalization of platforms. What is required is a model for implementing digitization in operations. ISPE has pioneered the restructuring of Industry 4.0 to fit the Pharmaceutical Industry, now known as the ISPE Pharma 4.0 Operating Model.

Pharma 4.0 Operating Model

Framework of ISPE Pharma 4.0 Operating Model

Enablers

Pharma 4.0 enablers

  • Digital maturity
  • Data integrity by design

ICH derived enablers

  • Knowledge management and risk management

Elements

Pharma 4.0 elements

  • Resources
  • Information systems
  • Organization and processes
  • Culture

The above table depicts the basic structure and framework of the ISPE Pharma 4.0 Operating Model, which consists of two broad components:

  • Enablers
  • Elements

ICH defined Enablers: Knowledge Management and Risk Management

ICH defines knowledge management as a systematic approach to acquiring, analyzing, storing, and disseminating information related to products, manufacturing processes, and components.

The different sources of information include:

  • Product design and development
  • Technology transfer
  • Commercial manufacturing, etc.

Knowledge about the product and product-related processes needs to be managed right from product development through commercial manufacturing up to product discontinuation. It has to be digitalized in the form of databases and should be connected directly to the raw data sources; this ensures the data integrity of all GxP and non-GxP data, which helps in making better choices and builds regulatory confidence.

Various in-line, at-line, and on-line tools are used for:

  • Analysis of raw materials.
  • In-process monitoring
  • Final product analysis

These tools can be directly integrated into database systems for real-time data management.

ICH Q9 (Quality Risk Management), also known as the ICH Q9 model, is a fundamental guideline that describes how potential risks to quality can be identified, analyzed, and evaluated.

This guideline is supported by ICH Q10 (Pharmaceutical Quality Systems) which describes a model for an effective quality management system.

The ICH Q10 implementation has three main objectives:

  1. Attain Product Realisation
  2. Develop and Maintain a state of process control.
  3. Ensure continuous improvement.

ICH Q10 provides guidelines regarding critical quality attributes (CQAs), which should be kept within a specific range to ensure the desired product quality. The process parameters and material attributes that affect the critical quality attributes are referred to as Critical Process Parameters (CPPs) and Critical Material Attributes (CMAs), respectively.

ICH Q12 builds on ICH Q10 to include those parameters which are not critical to quality but are responsible for the overall performance of the product. These attributes are known as Key Process Indicators (KPIs), and continuous efforts should be made to bring the KPIs under six sigma control.
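
As a simple, hedged illustration of what bringing a KPI "under control" means numerically, three-sigma control limits and a capability index (Cpk) for a hypothetical KPI can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical KPI readings (e.g., batch cycle time in hours) and illustrative spec limits.
kpi = rng.normal(loc=24.0, scale=0.5, size=100)
lsl, usl = 22.0, 26.0  # lower/upper specification limits (assumed values)

mean, sigma = kpi.mean(), kpi.std(ddof=1)

# Three-sigma control limits for monitoring the KPI over time.
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Process capability index; values of roughly 1.5-2 are often cited for six sigma performance.
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

print(f"Control limits: [{lcl:.2f}, {ucl:.2f}], Cpk = {cpk:.2f}")
```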

Any excursions or changes in the CQAs, CPPs, CMAs, and KPIs should be communicated to the respective regulatory authorities; prior approval is required in certain cases before the implementation of the changes.

Pharma 4.0 Enablers: Digital Maturity and Data Integrity by Design

The first enabler in Pharma 4.0 for making an organization a smart factory is digital maturity. It specifies the ability and the path of implementation of Pharma 4.0 for an organization. The model is developed in such a way that an organization can perform a gap assessment in terms of its current position in digital maturity, the improvements in its capabilities, and what its future capabilities should be. The basic requirement for achieving digital maturity is computerization and interconnectivity across all the quadrants of the operating model. After fulfilling these requirements, the organization can move towards advancement through capabilities like data visibility, predictive capacity, and adaptability.

  • Data visibility: A strategy where an organization can acquire, display, monitor, and analyze the data generated across all the sources in the organization.
  • Data Transparency: The ability to access the data no matter what generated it and where it is located.
  • Data Predictability and Adaptability: The ability of the data to predict future outcomes and improve on the predictability as more data is added to enhance the accuracy of the predictions.

These functions of the data help an organization to make a statistically calculated decision as they are based on real-time data.

ICH E6 (Good Clinical Practice) defines data integrity as the extent to which data is complete, consistent, accurate, trustworthy, and reliable throughout the data lifecycle. The regulatory approval of a drug and all the related processes depends on the quality and integrity of the submitted data. In 2016, the USFDA issued a guideline entitled "Data Integrity and Compliance with Drug cGMP" that focuses on developing effective strategies for data integrity throughout the life of the drug product.

These strategies should be based on quantitative risk assessments for patient safety. Moreover, data integrity should be built into the products and related processes during design and development; this can be done by introducing the digitalization of data integrity, known as 'Data Integrity by Design'. When digitalization is introduced, every process will have a defined workflow to avoid silos of information and data-integrity-related issues.

Pharma 4.0 Elements:

Resources:

Resources refer to the physical and intangible assets owned by an organization, majorly categorized into:

  • Human Resources
  • Machines
  • Products

The machines employed in Pharma 4.0 should be highly advanced and developed based on Artificial Intelligence and Machine Learning. They would be highly automated and adaptive to the ever-changing business needs of the organization. These machines can be connected to PAT tools for in-line, on-line, and at-line monitoring during the manufacturing of products. Such capabilities enable machines to take their own decisions. But to run these machines, a new generation of highly skilled people is required; these people are called Workforce 4.0. The success of Pharma 4.0 will largely depend on the engagement and continuous upskilling of Workforce 4.0 and the choice of Artificial Intelligence and Machine Learning platform.

Information

An information system is an integrated set of components for collecting, storing, and processing data and for providing information, knowledge, and digital products; by this means, the components relate to each other. This integration forms a basis for:

  • How data is interfaced
  • How processes are Automated
  • How processes have the power for predictive analysis.

The predictive analysis enables the real-time release testing of the products known as “ ad hoc reporting”, which is already being used by some organizations.

The other benefit of integration into information systems is the preventive maintenance of equipment. The equipment takes ownership of its own maintenance by analyzing daily data, flags potential maintenance activities in advance, and in some cases rectifies the abnormalities. This reduces equipment breakdown time significantly, thus increasing overall productivity. There are more potential areas of integration into the information system, but they should adhere to global standards like GAMP 5, ISO, etc.

Organization and Processes

An organizational structure needs to be developed that builds processes capable of meeting prospective business challenges. Pharma 4.0 is a huge undertaking for any organization and its outcomes are uncertain, so a sound, step-by-step organizational structure is required. Organizational processes need to be developed across all elements of the holistic control strategy so that each element functions collaboratively.

Culture

Culture refers to the shared beliefs and values of an organization that help achieve common organizational goals. It should promote collaborative contributions, as collaboration drives innovation. A culture should be developed in which people understand the importance of each Pharma 4.0 element and which percolates down to every stage of the product lifecycle, from early development to technology transfer and commercial manufacturing. New collaborations should be sought continuously to improve existing capabilities and acquire new ones, and people should be encouraged to adapt to change, because continuous upgrading is a requirement for survival in an ever-changing market.

Existing Control Strategy vs Holistic Control Strategy

  • The existing control strategy was once a game changer that improved quality oversight in manufacturing. However, it only reports quality, i.e., it can tell what has gone wrong but cannot predict when and what might go wrong. It provides process control through continuous monitoring of manufacturing processes for process-related excursions.
  • The Holistic Control Strategy as described by ISPE is based on ICH and Pharma 4.0 enablers and elements that provide control over the production process to ensure a flexible, agile, sustainable, and reliable manufacturing system with lower risks to patients, processes, and products. However, its success depends on the mutual consensus between industry and regulatory agencies.

Barriers to Pharma 4.0

Even though the Pharma 4.0 model might initiate a new era of smart pharmaceutical manufacturing, there are several barriers to the adoption of this model.

 The main barriers involved are:

  • High cost of digitalization
  • Time-consuming implementation
  • Shortage of a skilled and trained workforce
  • Uncertainty of outcomes

Despite these barriers, particularly the cost factor, Pharma 4.0 is going to become a reality because sustainability is a pressing business need. At Let’s Excel Analytics Solutions LLP we have developed cloud-based platform technologies that drastically cut down digitalization costs, so these barriers are quickly offset by the tremendous increase in productivity and significant reductions in downtime.

Summary

Pharma 4.0 digitalization is an imperative and inevitable transition that the pharmaceutical industry is undergoing, and the right digital platforms can make that transition smooth.

Curious to know about our automation-accelerating machine learning platform?

Demystifying Data Science Terms

Data Science: Demystifying the Terminologies

Introduction

Data Science related terminologies are buzzing around the internet, marking the onset of the Industry 4.0 revolution. Data Science is a discipline that studies big data and uses modern tools and techniques for data mining and data analysis, with applications across a wide variety of domains. For example, Google’s AI retinal scan project collected retinal images from thousands of patients across South India and analyzed the data to estimate the patients’ disposition to cardiovascular disease over the next five years.

Statisticians, chemometricians, and mathematicians have been living and breathing data science concepts for years, although they may not use exactly the same buzzwords. Perhaps the spread of the new terminology is an outcome of massive online Data Science courses, or of the rebranding strategies of companies trying to bank on ‘Data Science’ capabilities. Whatever the reason, as we prepare for the Industry 4.0 revolution we should get familiar with these new terms. In this article, we broadly explain the meaning of these terminologies based on our interactions with various clients.

Big Data

Big data, put simply, refers to a collection of data from a wide variety of sources at a colossal scale. The data collected may be quantitative or qualitative, known or unknown, structured or unstructured, and so on. Because the scale of the data is enormous, it is stored in specialized databases, known as big databases, built with database technologies such as SQL, MySQL, etc. Collections of big data are also referred to as data warehouses. Many big databases are open source, e.g., Cassandra, HBase, MongoDB, Neo4j, CouchDB, OrientDB, Terrastore, etc., while many popular databases are commercial, e.g., Oracle, Microsoft SQL Server, SAP HANA, etc. It is essential to state that the choice of database is a fundamental and critical step in the Data Science workflow. The storage requirements of big data can range anywhere from MBs to TBs. Sometimes the data volume may be small, but the data complexity can be high. That is where data engineers pitch in to make things easy.

Data Engineering

The process of building workflows to store data in a big data warehouse and then extract the relevant information is called Data Engineering.

Data Mining

Data mining is the process of extracting patterns from large datasets by combining methods from statistics and machine learning with database management. These techniques include association rule learning, cluster analysis, classification, and regression. Applications include mining customer data to determine segments most likely to respond to an offer, mining human resources data to identify characteristics of most successful employees or market basket analysis to model customers’ purchase behaviour.
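As an illustration of the market basket idea, the following minimal Python sketch counts how often pairs of items appear together in a handful of invented transactions; real data mining would use far larger datasets and dedicated association-rule tooling.

```python
# Minimal market-basket sketch: counting how often item pairs are bought together.
# The transactions list is invented for illustration.
from collections import Counter
from itertools import combinations

import pandas as pd

transactions = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"beer", "chips"},
    {"bread", "jam"},
]

# Count co-occurrences of every item pair across baskets
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pd.Series(pair_counts).sort_values(ascending=False))
```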

Data Analysis

Data analysis is the exercise of analyzing, visualizing, and interpreting data to extract relevant information that helps organizations make informed business decisions. It also involves data cleaning, outlier analysis, preprocessing, and transformation to make data amenable to analysis. Data analysis is a broad term that encompasses at least five different types of analyses, and a data scientist chooses the most appropriate method based on the end goal. Sometimes the same method serves different end goals, so the same underlying mathematical and statistical technique may go by different names. The main forms of data analysis are described briefly below:

  • Descriptive statistical analysis is the fundamental step for performing any data analysis. It is also known as summary statistics and gives an idea of the basic structural features of the data like measures of central tendency, dispersion, skewness, etc.
  • Inferential statistical analysis is a type of statistical analysis that uses the information contained in a sampled data to make inferences about the corresponding larger population. It uses hypothesis testing of the data to draw statistically valid conclusions about the population. As the sampling process is always associated with an element of error, statistical analysis tools should also account for the sampling error so that a valid inference is drawn from the data.
  • Chemometrics is the science of extracting and analyzing physico-chemical information using spectroscopic sensors and other material characterization instruments. Chemometrics is interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science to address problems in chemistry, biochemistry, medicine, biology, food, agriculture, and chemical engineering. Chemometrics generally utilizes information from spectrochemical measurements such as FTIR, NIR, Raman, and other material characterization techniques to control product quality attributes. It is also used for building Process Analytical Technology tools.
  • Predictive analysis models patterns in big data to predict the likelihood of future outcomes. The models built rarely achieve 100% accuracy and always carry an intrinsic prediction variance; however, their accuracy improves as more data is taken into account. Predictive analysis can be performed using linear regression, multiple linear regression, principal component analysis, principal component regression, partial least squares regression, and linear discriminant analysis (a minimal code sketch appears after this list).
  • Diagnostic analysis, as the name suggests, is used to investigate what caused something to happen. It uses historical data to look for occasions when the same thing happened in the past, making it a more investigative type of data analysis. It involves four main steps: data discovery, drill down, data mining, and correlations. Data discovery is the process of identifying similar sources of data that underwent the same sequence of events in the past. Drill down focuses on a particular attribute of the data that interests us. This is followed by data mining, which ends with a search for strong correlations that lead us to the event’s cause. Diagnostic analysis can be performed using all the techniques mentioned for predictive analysis; however, its end goal is only to identify the root cause in order to improve the process or product.
  • Prescriptive analysis is the sum of all the data analysis techniques discussed above, but it is more oriented towards making and influencing business decisions. Specifically, prescriptive analytics factors in information about possible situations or scenarios, available resources, and past and current performance, and then suggests a course of action or strategy. It can be used to make decisions on any time horizon, from immediate to long term.
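As a concrete illustration of predictive analysis, the sketch below fits a partial least squares (PLS) regression model on synthetic data with scikit-learn; the data and parameters are invented for demonstration only.

```python
# Minimal predictive-analysis sketch: partial least squares (PLS) regression
# on synthetic data, using scikit-learn. Illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                  # 10 predictor variables
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(scale=0.1, size=200)   # response variable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = PLSRegression(n_components=3)
model.fit(X_train, y_train)
y_pred = model.predict(X_test).ravel()

print("R^2 on held-out data:", r2_score(y_test, y_pred))
```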

Data Analytics

Data analytics is the umbrella term for all the activities mentioned above, from big data and data engineering through data mining to the analysis of the data.

Machine Learning

Machine Learning builds programs that learn from data and can predict future events with little human supervision. Machine learning analytics is an advanced and automated form of data analytics. Machine Learning is generally regarded as a subset of Artificial Intelligence, and a model’s predictions typically improve as new data is added to it.

Machine learning algorithms that use layered networks capable of learning representations directly from data are called Deep Learning algorithms. Examples include deep neural networks, or Artificial Neural Networks, inspired by the structure and function of the brain. These algorithms are designed to be analogous to human intelligence. A major practical difference between classical machine learning and deep learning is the degree of human involvement: deep learning algorithms learn features directly from raw data, whereas classical machine learning typically relies on manually engineered features.

Conclusion

The field of Data Science is more oriented towards the application side of the modern AI/ML tools that employ advanced algorithms to build predictive models that can transform the future of what we do and how we do it.

Curious to know about our machine learning solutions?

Predictive Analytics in Cancer Diagnosis

Predictive Analytics in Cancer Diagnosis

Introduction

GLOBOCAN 2020, one of the key cancer surveillance projects of the International Agency for Research on Cancer (IARC), published recent statistics on global cancer epidemiology. According to this report, 19,292,789 new cancer cases were reported in 2020, an increase over the number of cases reported in 2018. Alongside these more than 19 million new cases, 9,958,133 cancer-related deaths were reported in the same year. As per IARC estimates, 1 in 5 people is likely to develop cancer during their lifetime. In this article we discuss how Predictive Analytics can play a major role in changing these cancer statistics.

Cancer Statistics: 2020

 

Males vs. Females

  • Population: Males 3,929,973,836; Females 3,864,824,712
  • Number of new cancer cases: Males 10,065,305; Females 9,227,484
  • Number of cancer deaths: Males 5,528,810; Females 4,429,323
  • 5-year prevalent cases (males): 24,828,480
  • Top 5 cancers (excluding non-melanoma skin cancer): Males: Lung, Prostate, Colorectum, Stomach, Liver; Females: Breast, Lung, Colorectum, Prostate, Stomach

Data taken from GLOBOCAN 2020

Estimated Number of Cases Worldwide

These alarming and constantly rising figures have refocused the attention of medical scientists on early screening and diagnosis of cancers using Predictive Analytics, because cancer mortality and morbidity can be reduced by early detection and treatment.

The American Cancer Society (ACS) issues updated guidelines on the early screening of cancers to assist in making well-informed decisions about tests for the early detection of some of the most prevalent cancers (breast, colon and rectal, cervical, endometrial, lung, and prostate cancer). The early detection of precancerous lesions and cancers is broadly divided into two categories:

Early cancer diagnosis

Cancers respond much better to treatment when diagnosed early, which in turn increases the chances of survival. As per WHO guidance, early diagnosis is a three-step process that must be integrated and provided in a timely manner:

  1. Awareness of cancers and accessing care as early as possible
  2. Clinical evaluation of cancers, appropriate diagnosis and staging of cancers
  3. Access to the right treatment at the right stage.

Screening of cancers

Screening identifies specific markers that are suggestive of a particular cancer. For example, the visual inspection with acetic acid (VIA) test can be used for early screening of cervical cancer in women: cervical lesions turn white for a few minutes after application of acetic acid.

However, early diagnosis and screening of cancer suffer from drawbacks such as false positives, false negatives, and overdiagnosis, which may lead to more invasive tests and procedures. To overcome this problem, scientists are harnessing the power of Predictive Analytics based on Artificial Intelligence and Machine Learning.

Introduction to Artificial Intelligence (AI)

Artificial Intelligence (AI) is a powerful tool for Predictive Analytics. Webster’s dictionary defines it as a branch of computer science dealing with the simulation of intelligent behaviour in computers; in other words, it is the capability of a machine to imitate intelligent human behaviour.

One of the early pioneers of Artificial Intelligence, Alan Turing, published an article in 1950 entitled “Computing Machinery and Intelligence.” It introduced the so-called Turing test to determine whether a computer can exhibit the same level of intelligence as humans. The term “Artificial Intelligence” was coined by John McCarthy at the Artificial Intelligence conference at Dartmouth College in 1956, and it was Allen Newell, J.C. Shaw, and Herbert Simon who introduced the first AI-based software program, the Logic Theorist.

The majority of Artificial Intelligence (AI) applications use Machine Learning (ML) algorithms to find patterns in datasets, and these patterns are used to predict future outcomes.

The basic framework of Artificial Intelligence (AI) consists of three main steps:

  1. Collecting input data
  2. Deciphering the relationship between input data
  3. Identifying unique features of sample data

Introduction to Machine Learning (ML)

Machine learning (ML) is another tool for Predictive Analytics and is defined in Webster’s dictionary as the process by which a computer is able to improve its own performance by continuously incorporating new data into an existing statistical model. It allows the system to reprogram itself as more data is added, eventually increasing the accuracy of the assigned task.

Machine learning (ML) is an iterative process, so the predictive ability of the system improves with each iteration. Most Machine Learning (ML) algorithms are mathematical models that map observed variables, termed features, to outcomes, termed labels. The labels and features are used to classify different ML tools and techniques. Based on the label type, Machine Learning (ML) algorithms can be categorised into:

Supervised Learning

Unsupervised Learning

Reinforcement Learning

In supervised learning, models are trained on labelled datasets. For prediction, the model learns a mathematical function that maps the input variables to the output variables. Supervised learning is used for Classification and Regression problems.

In unsupervised learning, patterns are found in unlabelled data; the goal is to discover characteristic structure in the data. Unsupervised learning is used for identifying Clustering and Association in datasets.

Reinforcement learning is learning by interacting with the environment: a reinforcement learning algorithm makes decisions based on its past experience and on new exploration.
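The following minimal scikit-learn sketch contrasts supervised and unsupervised learning on the classic Iris dataset: the classifier is trained with labels, while the clustering algorithm finds structure without them. It is illustrative only and is not tied to any product mentioned in this article.

```python
# Supervised vs. unsupervised learning on the same dataset (illustrative sketch).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised: the labels (y) are used to train a classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the labels are ignored; the algorithm looks for structure on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```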

The PCA part of MagicPCA 1.0.0. is an unsupervised Machine Learning Approach, whereas the SIMCA part of it is a Supervised Classification Technique.

Rising Interest in Biomedical Research

In its initial years, the journey of AI was not easy, as seen in the period 1974-1980, known as the AI winter, when the field experienced a low in researchers’ interest and government funding. Today, after decades of advances in data management and super-fast computers, and renewed interest from governments and corporations, it is a practical reality and finds applications in a wide variety of fields such as e-commerce, medical sciences, cybersecurity, agriculture, space science, and the automobile industry.

As the phrase “Data Science is everywhere” caught on, biomedical researchers started delving into Artificial Intelligence (AI) and Machine Learning (ML) to look for better solutions through Predictive Analytics. One inspiring story is that of Regina Barzilay, a renowned professor of Artificial Intelligence and a breast cancer survivor, whose diagnosis reshaped her research interests. She hypothesized that AI and ML tools can extract additional clinical information that helps clinicians make knowledgeable decisions. She collected data from medical reports and developed Machine Learning algorithms to interpret radiodiagnostic images for clinicians, and one of her models has been implemented in clinical practice to help radiologists read diagnostic images more accurately.

Current scenario: Predictive Analytics in Cancer Diagnosis

The concept of AI/ML has long been employed as a Predictive Analytics tool in the radiodiagnosis of precancerous lesions and tumours.

The AI system reads images generated by various radiological techniques, such as MRI and PET scans, and processes the information contained in them to assist clinicians in making informed decisions on the diagnosis and progression of cancers.

Breast Cancer Diagnosis with QuantX

The FDA’s Center for Devices and Radiological Health (CDRH) has approved the first AI-based breast cancer diagnosis system for Predictive Analytics: QuantX, developed by Qlarity Imaging (Paragon Biosciences LLC). QuantX is described as a computer-aided diagnosis (CADx) software system that assists radiologists in the assessment and characterization of breast anomalies using Magnetic Resonance Imaging (MRI) data. The software automatically registers images and segmentations (T1, T2, FLAIR, etc.) and analyses user-directed regions of interest (ROI). QuantX extracts data from the ROI to provide computer-aided analytics based on morphological and contrast-enhancement characteristics. These imaging analytics are then combined by an artificial intelligence algorithm into a single value, known as the QI score, which is analysed relative to a reference database. The QI score is based on a machine learning algorithm trained on features calculated from segmented lesions.

Cervical Cancer Diagnosis with CAD

The National Cancer Institute (NCI) has also developed a computer-aided diagnosis (CAD) program that analyses digital images of the cervix and identifies potentially precancerous changes that require immediate medical attention. This Artificial Intelligence-based approach is called Automated Visual Evaluation (AVE). A large dataset of around 60,000 cervical images of precancerous and cancerous lesions was used to develop a machine learning algorithm that recognizes patterns in visual images that lead to precancerous cervical lesions. The algorithm-based evaluation of images has been reported to provide better insight into precancerous lesions than routine screening tests, with a reported accuracy of 0.9.

Lung Cancer Diagnosis with Deep Learning Technique

NCI-funded researchers at New York University used Deep Learning (DL) algorithms and Predictive Analytics to identify gene mutations from pathology images of lung tumours. The images were collected from The Cancer Genome Atlas and used to build an algorithm that can predict specific gene mutations from visual inspection of the images. The method can accurately distinguish different types of lung cancer and the corresponding gene mutations from image analysis alone.

Thyroid Cancer with Deep Convolutional Neural Network

Deep Convolutional Neural Network (DCNN) models were used to develop an accurate diagnostic tool for thyroid cancer by analysing ultrasonography images. A total of 131,731 ultrasound images from 17,627 patients with thyroid cancer and 180,668 images from 25,325 controls were collected from the thyroid imaging database of Tianjin Cancer Hospital and used to train a DCNN model. The model showed similar sensitivity and improved specificity in identifying patients with thyroid cancer compared with a group of skilled radiologists.
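For readers curious about what a DCNN looks like in code, here is a deliberately tiny, hypothetical Keras sketch for binary image classification; the random arrays stand in for real ultrasound images, and the architecture is far smaller than anything used in the studies described above.

```python
# Minimal DCNN sketch for binary image classification (e.g., suspicious vs. benign).
# Placeholder random data stands in for real ultrasound images.
import numpy as np
import tensorflow as tf

X = np.random.rand(32, 128, 128, 1).astype("float32")  # 32 grayscale images
y = np.random.randint(0, 2, size=32)                    # binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```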

AI/ML for Personalized Medicines

Researchers at Aalto University, the University of Helsinki, and the University of Turku developed a machine learning algorithm that can accurately predict how combinations of different antineoplastic drugs kill various types of cancerous cells. The algorithm was trained on data from a study that investigated the association between different drugs and their effectiveness in treating cancers. The model showed associations between different combinations of drugs and cancer cells with high accuracy; the correlation coefficient of the fitted model was reported to be 0.9. This AI model can help cancer researchers prioritize which combinations of drugs to investigate further from a plethora of options, illustrating how AI and ML can be used for the development of personalized medicines.

Future challenges of AI/ML in cancer diagnosis

Data Science is shaping the future of the healthcare industry like never before, and there has been a spurt of interest in AI and ML for the diagnosis of precancerous lesions and the surveillance of cancerous lesions. Researchers are working to develop AI algorithms that help in the diagnosis of many other cancers. However, each type of cancer behaves differently, and these differences pose a significant challenge for the algorithms. Machine learning tools can overcome these challenges by training algorithms on these subtle differences, which would drastically improve decision making for clinicians.

One of the biggest challenges for Artificial Intelligence today is acceptance of the technology in the real world, particularly for medical diagnoses of terminally ill patients, where decision making plays a critical role in the longevity of the patient. The AI black-box problem compounds this further: programmers can see the input and output data, but how the algorithm arrives at its output is not transparent.

Regulatory aspects of AI/ML in cancer diagnosis

In 2019, the US FDA published a discussion paper entitled “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback.” The FDA’s intention was to develop a regulatory framework for such medical software by issuing draft guidance on the Predetermined Change Control Plan outlined in the discussion paper. The Predetermined Change Control Plan mapped out a regulatory premarket review for AI/ML-based SaMD modifications.

In 2021, the FDA published the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.” The FDA encouraged the development of harmonized Good Machine Learning Practices for AI/ML-based SaMD through the participation of industry and other stakeholders in consensus standards development efforts. The plan built upon the October 2020 Patient Engagement Advisory Committee (PEAC) meeting focused on patient trust in AI/ML technologies.

The FDA supports regulatory science efforts on the development of methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and on the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions.

Conclusion

The employment of Predictive Analytics in cancer diagnosis has addressed major challenges in cancer diagnosis and treatment. It can enable early screening of precancerous lesions and reduce mortality in cancer patients. AI/ML provides more accurate detection and prognosis of cancers, thereby reducing the incidence of false positives, false negatives, and overdiagnosis. These techniques can also be used to track the progression of cancers under immunotherapy and radiotherapy. AI/ML also has potential applications in the development of personalized medicines through therapies tailored to each specific cancer.

By detecting cancers early and accurately, the prognosis of cancer treatment can be greatly improved. Early detection also has a huge impact on the cost of complicated cancer treatments and could drastically improve survival rates by reducing mortality.

If you are struggling to make use of cancer data and need help to develop Machine Learning Models, then feel free to reach out to us. At Let’s Excel Analytics Solutions, we help our clients by developing cloud-based software solutions for predictive analytics using Machine Learning.

Predictive Data Science in Food and Beverages

Predictive Data Science in Food and Beverages

Introduction: Predictive Data Science

Predictive data science is no longer limited to data scientists and engineers. Interested in exploring what you can do in the food and beverage industry? Read this article to learn how your competitors are leveraging predictive data science to improve their operations. Progress happens with each step taken, and with the market becoming more competitive than ever, everyone is eager to find a breakthrough solution. According to a news report on CISION PR Newswire, the global food and beverages market reached a value of nearly $5,943.6 billion in 2019, having grown at a compound annual growth rate (CAGR) of 5.7% since 2015, and is expected to grow at a CAGR of 6.1% from 2019 to reach $7,525.7 billion in 2023. That means there are massive amounts of data just waiting to be analysed and processed for meaningful insights using predictive data science. Let us now see what data science and analytics can do for the food and beverage industry.

Predictive Data Science in Restaurant Industry:


When talking about the benefits of predictive data science in food, we cannot leave out restaurants. Restaurant owners often do not realize the tremendous amount of data generated by their customers, and so they may miss opportunities to decrease costs and improve customer experience. With a well-designed implementation of data science, restaurant owners can obtain real-time analysis of their customers’ data and make the required improvements. For instance, owners can identify their highest-selling or most expensive items, track the quality of food offered, and more. Based on this data, they can make informed choices and fix their mistakes.

Predicting shelf life:


Each type of food has its own shelf life, causing it to expire over time, although certain items only grow better with age: wine improves with time, but fresh produce expires. Different foods and drinks have different shelf lives, and managing all of them independently is a major challenge for this industry; the procedure for dealing with wine is very different from the procedure for dealing with perishable products. By incorporating predictive data science, data engineers can predict the shelf life of produce, ensuring pre-emptive action is taken to reduce waste and save money and time.


Sentiment Analysis:


Social media, review websites, and food delivery apps have allowed the food industry to do something that was not possible in the past: sentiment analysis. Using natural language processing (NLP), organizations can analyse their social media channels and discover patterns and trends in the data. This allows them to discover the most popular foods and beverages of any season, as well as during special occasions and other festivities. Brands, restaurants, and organizations can, in turn, be more responsive to people’s demands and act accordingly. Web analytics tools such as Google Analytics can be a helpful complement here.
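A minimal sentiment-analysis sketch might look like the following, using TF-IDF features and logistic regression in scikit-learn on a handful of invented reviews; production systems would of course use much larger labelled datasets or pretrained language models.

```python
# Minimal sentiment-analysis sketch: TF-IDF features + logistic regression.
# The tiny labelled review set below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The tasting menu was incredible, will come back",
    "Cold food and rude staff, never again",
    "Loved the seasonal pumpkin latte",
    "Waited an hour for a soggy burger",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["The new dessert is amazing"]))  # expected: positive (1)
```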


Better Supply Chain Transparency:


Let us now look at another example of how data analytics can benefit growers, transporters, processors, and food retailers:

  • The farmers enter soil test results, along with planting and harvesting data, into a database used by a particular software program.
  • Information about the weather is also entered into the database; inputs about precipitation and temperature can be automated.
  • The logistics company that transports the farmer’s crop from the farm to the processing mill inputs the start and end times for the trip.
  • The food processor enters the start and stop times for the various stages of processing; sorting, washing, packaging, and placement in cold storage can all be tracked with automated sensors.
  • The product is then monitored from the processor to the retailer, so any delays that could cause the food to spoil can be easily identified.
  • At the destination, the vendor records the quality of the food on arrival.
  • Customer feedback on social media can also be added to the collected data to provide further insight into the food supply chain.

The entire supply chain can access this information. If any problems arise, changes can be made to the process to prevent recurrence, and retailers can choose to accept or reject a shipment based on the data. The software analyses the data and provides intelligent, accurate conclusions to all parties involved in the supply chain. Analytics software draws on a large number of sources, including social media, and both structured and unstructured data are used in the analysis. This collection of data is known as big data.
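As a simple illustration of how such records could be linked, the hypothetical pandas sketch below joins farm, transport, and processing tables on a shared batch identifier and flags shipments at higher risk of spoilage; all column names and thresholds are invented.

```python
# Minimal sketch: joining hypothetical farm, transport, and processing records
# on a shared batch_id so the whole supply chain can trace a shipment.
import pandas as pd

farm = pd.DataFrame({
    "batch_id": ["B001", "B002"],
    "harvest_date": ["2021-06-01", "2021-06-03"],
    "soil_ph": [6.4, 6.8],
})
transport = pd.DataFrame({
    "batch_id": ["B001", "B002"],
    "transit_hours": [7.5, 12.0],
})
processing = pd.DataFrame({
    "batch_id": ["B001", "B002"],
    "cold_storage_hours": [36, 60],
})

trace = farm.merge(transport, on="batch_id").merge(processing, on="batch_id")

# Simple rule-of-thumb flag for shipments at higher risk of spoilage
trace["spoilage_risk"] = (trace["transit_hours"] > 10) | (trace["cold_storage_hours"] > 48)
print(trace)
```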


Measuring Critical Quality Attributes:


There are certain primary attributes against which the food and beverage industry measures the quality of its products, and these attributes can be a great asset in marketing them, for example the alcohol concentration in beer. However, conventional laboratory methods of measuring such primary attributes are time-consuming and delay the production process. Predictive data science and analytics allows organizations to pair rapid measurements, such as near-infrared spectroscopy, with models like Orthogonal Partial Least Squares regression and multiple regression to estimate alcohol content and colour faster and more cost-effectively.
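A minimal sketch of the regression idea, using synthetic absorbance readings and scikit-learn's multiple linear regression, might look like this; the wavelengths, coefficients, and data are invented for illustration.

```python
# Minimal sketch: predicting alcohol content from a few (synthetic) spectral
# absorbance features with multiple linear regression. All values are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
absorbance = rng.normal(size=(100, 5))  # 5 wavelength bands per sample
alcohol = (
    4.5
    + absorbance @ np.array([0.8, -0.3, 0.1, 0.0, 0.2])
    + rng.normal(scale=0.05, size=100)
)  # % ABV

model = LinearRegression().fit(absorbance, alcohol)
print("Predicted %ABV for a new sample:", model.predict(absorbance[:1])[0])
```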


Better Health Management:


Consumers want the food industry to be more transparent. The leading firms of the multi-billion-dollar beef industry realised this when they gathered for Beef Australia 2018, a convention that sees over 90,000 visitors. Consumers expect restaurants and organisations to be forthright with them: they want to know how the food was produced, how the livestock was treated, and what chemicals, if any, were used. Data science and analytics helps incorporate transparency within these supply chains, so that organizations can be more honest with their customers. Transparency also assists in solving problems related to logistics and supply; for instance, it becomes easier to track contaminated food supplies to their storage locations, reducing the chances of spread of food-borne diseases.
Predictive data science and analytics also allows organizations to protect food safety and prevent cross-contamination. Geographical data, along with satellite data and remote sensing techniques, allows data analysts to detect changes in field conditions. This information, combined with data on temperature, soil properties, and proximity to urban areas, can predict which part of a farm is likely to be affected by pathogens, so that action can be taken beforehand. Another excellent example: when cities are short on food inspectors, data analytics can analyse historical data on 13 key variables to help pinpoint the riskiest establishments, making better use of limited inspection resources.
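A hypothetical sketch of such risk ranking, using a random forest trained on 13 synthetic variables, could look like the following; the features and labels are invented and stand in for real inspection data.

```python
# Minimal sketch: ranking food establishments by inspection risk using a
# random-forest classifier on 13 synthetic variables (invented for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 13))  # 13 risk-related variables per establishment
y = (X[:, 0] + X[:, 4] + rng.normal(size=500)) > 1  # past critical-violation flag

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]

# Inspect the ten highest-risk establishments first
print(np.argsort(risk_scores)[::-1][:10])
```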

Predictive Data Science for Food Innovations


Organizations need to keep pace with the changing demands of consumers. With tastes fluctuating according to season, time of day, weather, mood, and so on, it becomes crucial for organizations to take the assistance of predictive data science. The data is then converted into meaningful information that aids important decisions and helps improve sales and overall performance.

Food Marketing


Predictive data science also assists businesses in improving their marketing campaigns, developing creative, high-demand products, staying up to date on their competitors’ growth, controlling quality, and assessing decisions about purchasing and pricing. The data also helps businesses keep track of crucial factors, such as product quality, by gauging whether the composition of the product has been altered in any way.

Conclusion


Predictive data science and analytics has brought about positive growth in many industries, including food and beverage. This industry has its fair share of difficulties: with an ever-growing population, consumers are always looking for the best option they can get. Since the consumer is key, organizations need to make decisions that revolve around consumers’ tastes. Data science enables businesses to determine which option will best suit consumers; it allows organizations to collect and analyse data and arrive at interesting patterns and trends over time. The technology can also be used to conceive creative solutions to problems plaguing the industry while bringing positive developments to food and beverage.

Curious to explore Predictive Data Science in your work?