
Posts Tagged ‘Data Mining’

Air Liquide has established an ambitious programme that will take digitalization of the business to the next level. We spoke to Moussa Diakhité, Real Time Engineer, Air Liquide France Industrie, about the changes in operations and the opportunities and challenges encountered during the transformation.

„With the centralization and digitalization of its operations, Air Liquide has changed its usage of data. The implementation of SIO streams and a contextual database for time series data are part of those changes.“

Moussa Diakhité, Air Liquide, in his statement on the PRAXISforum Big Data Analytics

In your statement, you say that Air Liquide has changed its usage of data. Could you describe the most important aspect of this change – is it how you regard data, the combination of data that were previously treated separately, or the amount of data you analyse – or something completely different?

As for Air Liquide France Industrie, the large industry business has been collecting data for about 15 years. The change lies mainly in the analytics we perform on it, its standardization, and the new tools such as „SIO Predict“ for predictive maintenance, „SIO Perform“ to optimize the units, and „SIO Optim“ with the digital twin we are building from the data already stored on our servers.

According to press articles, Air Liquide uses the SIOs to run operations remotely and is, among other things, using predictive maintenance methods. What is the relation between the remote centers and the engineers on site – do they have different roles?

As for Air Liquide France Industrie, the distribution of roles on the plants used to be as follows: Plant manager / Production supervisor / Technicians. Now it is: Plant manager / Foreman / Technicians. The plant team focuses on plant safety, equipment availability, maintenance and regulatory compliance, while at the SIO Center we take care of production optimization, predictive maintenance, customer service and billing for all plants.

Information and registration: https://dechema.de/BigData.html

Thanks to predictive maintenance, we are able to detect the risk of equipment failure, and we discuss with the plant staff the best schedule to check the equipment and/or plan maintenance on it without disturbing our customers. Our missions are complementary in this respect, as the plants retain local expertise on all maintenance matters.

As for SIO Optim and SIO Perform, we have tools providing us with the best configuration for our plants; using SIO Drive we are able to apply the proposed solutions remotely. But we still communicate a lot with the plant to ensure that the proposed solution is feasible (no limitation on the water cooling system, equipment availability, …), and we certainly need to unite the SIO Center and the plants in a joint team. Explaining the actions taken on either side is mandatory to achieve the goal of SIO; an e-logbook has been developed for that purpose.

You also use big data methods to predict the needs of your customers. How much integration of data and knowledge with your customers’ operations is necessary for this?

Using statistical computing tools such as R, we are able to forecast our customers' liquid and gas demand, enabling us to plan for their consumption. They also provide us with forecasts of their consumption by mail and by phone.
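The interview names R as the forecasting tool but shows no code, so here is a minimal Python sketch of the kind of demand forecast described: simple exponential smoothing over a daily consumption series. The figures, parameter and units are invented for illustration.

```python
# Minimal demand-forecasting sketch: simple exponential smoothing.
# The daily gas consumption figures (Nm^3) are invented example data.
daily_demand = [1020, 990, 1015, 1080, 1050, 1100, 1075, 1120]

def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; its last value serves as a one-step forecast."""
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

forecast = exponential_smoothing(daily_demand)[-1]
print(f"Forecast for the next day: {forecast:.0f} Nm^3")
```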

What do you personally perceive as the biggest opportunity in this whole range of big data applications – more the operative part, more the business part, or is Air Liquide maybe even introducing new business models based on digitalization?

Personally, I think all of this is a big opportunity to make excellent solutions developed by Air Liquide's experts accessible and easy to implement across the Group. Using these tools, we give lasting value to all the work already done.

If you want to learn more about the digital transformation at Air Liquide, meet Moussa Diakhité (and many other experts) at the DECHEMA-PRAXISforum Big Data Analytics in the Process Industry on 9-10 April 2019 in Frankfurt. Last-minute registration is possible on site!


Hardly a week goes by without news of the advancing digitalization of the process industry. Centres for big data analysis are being set up, new positions created and company-wide programmes launched to introduce digitalization concepts that go far beyond the automation of production. According to the Verband der Chemischen Industrie (German Chemical Industry Association), the chemical industry is planning to invest more than 1 billion euros in digitalization concepts and new sustainable business models over the next few years.

The expected gains appear to justify these investments: „According to a study by Fraunhofer IAO and the IT association BITKOM, networking product development, production, logistics and customers could increase value creation in the chemical industry by 30% by 2025. In other words: whoever succeeds in making their data usable along the entire value chain can secure an enormous competitive advantage,“ explains Mirko Hardtke, Business Development Manager at Data Virtuality GmbH.

No wonder there is enormous interest in developing tools that can unearth the hidden treasure in big data. Companies large and small as well as startups are joining this new gold rush: „Rising process complexity and rapidly growing data volumes call for new technologies for handling data, above all where dependencies exist between data silos,“ says Sebastian Dörr, Vice President Sales at Conweaver GmbH. His company has developed a technology that automatically links data from different sources, making contextual knowledge accessible to data owners and users. At the same time, the platform serves as a foundation for big data analytics.

(more …)


Smart Sensors

The emergence of new production philosophies, initiated by the FDA's PAT initiative and in particular the German government's Industry 4.0 future project, has led to a reorientation of sensor technology that will set the trend for biotechnology processes with their special requirements – especially continuous and/or integrated production – in the coming years. Under the maxim of process observability and controllability, a clear trend towards smart sensors has emerged, with a focus on sensor intelligence, decentralization, multi-sensor systems and miniaturization.

How to avoid drowning in the data flood

Driven by the government's Industry 4.0 future project, the real and virtual worlds are merging into the Internet of Things. Through intelligent methods of process monitoring and decision making, production processes, companies and complete value chains are to be controlled and optimized almost in real time. For a holistic and sustainable implementation of the vision of the intelligent company, it is particularly important for biotechnology – with its high demands on product quality and safety and its sometimes highly complex production processes and structures – to obtain reliable data for production control. More and more modular, intelligent and networked components must make this data available and, via integrated analytical tools, take over the simultaneous evaluation of this flood of data. In this context, smart sensors are sensors that perform complex signal-processing tasks in addition to the actual measurement task, can be parameterized and diagnosed, and can provide additional information about themselves and the process environment.

For applicability to biotechnological processes in particular, the following elements of extended sensor intelligence are of essential importance, as they provide users with greater process reliability as well as cost and time savings:

  • Self-diagnosis, self-identification and reporting of the sensor's own status
  • Execution of decentralized logic functions (if-then) and processing of complete process functions (only the result is reported to the PLC) to increase process reliability and reduce the volume of data to be transmitted (see the sketch after this list)
  • Independent validity check of the measured values and adequate information summary
  • Selection and evaluation of process profiles, characteristics and parameters, and their translation into status messages such as „in control“ or „out of control“
  • Direct interaction with assigned actuators via decentralized control units
  • Trend determination and prediction of process trajectories
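To make these capabilities more concrete, here is a minimal Python sketch – not taken from the position paper – of a smart sensor that checks the validity of its own readings, runs a decentralized if-then function and reports only a condensed status to the PLC. All names, limits and readings are illustrative assumptions.

```python
# Illustrative smart-sensor sketch: local validity check, if-then logic,
# and condensed status reporting instead of streaming raw values.
class SmartSensor:
    def __init__(self, lower_limit, upper_limit):
        # Plausibility band for valid measurements (hypothetical values).
        self.lower_limit = lower_limit
        self.upper_limit = upper_limit

    def self_diagnosis(self, value):
        """Independent validity check of a measured value."""
        return self.lower_limit <= value <= self.upper_limit

    def evaluate(self, value, setpoint, tolerance):
        """Decentralized if-then logic: only the result is reported to the PLC."""
        if not self.self_diagnosis(value):
            return "sensor fault"
        if abs(value - setpoint) <= tolerance:
            return "in control"
        return "out of control"

# Hypothetical pH sensor with a plausibility band of 0 to 14.
sensor = SmartSensor(lower_limit=0.0, upper_limit=14.0)
for reading in [7.1, 7.4, 9.2, 42.0]:
    print(reading, "->", sensor.evaluate(reading, setpoint=7.0, tolerance=0.5))
```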

The extension of sensor intelligence is particularly important for applicability to biotechnological processes. Self-diagnosis, self-identification and reporting of the sensor's own status should take place in the sensor itself, so that independently validated measured values can be transferred to the control system. As a result, routine testing work in the laboratory can be reduced: checks are only carried out when necessary, and the personnel capacity freed up can be re-allocated within the company to value-adding tasks.

Requirements for sensors in biotechnology

The prerequisite for this is an adequate information summary of the available data and suitable data preprocessing with the execution of decentralized logic functions. In this way, the smart sensor can independently record process events and evaluate them using corresponding functionality (e.g. correlation analyses of abiotic and biotic data). The actual control action is handed over for further processing in the control cycle. Independent process analysis, parameter evaluation and decision making by individual sensors, or by sensors in combination – for example as a multi-agent framework – offer enormous potential for optimizing bioprocesses and increasing their efficiency. In particular, the complex topic of population heterogeneity and the use of complex substrate matrices open up a broad field for exploring the possibilities and limitations of smart sensors with regard to innovative population concepts and models.
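As a rough illustration of the correlation analysis of abiotic and biotic data mentioned above, the following Python sketch computes the Pearson correlation between a temperature profile and a biomass signal and flags a strong coupling as a process event. Both series and the threshold are invented.

```python
# Sketch: Pearson correlation between an abiotic signal (temperature, °C)
# and a biotic signal (biomass, g/L) -- invented example data.
from statistics import correlation  # available since Python 3.10

temperature = [30.0, 30.5, 31.0, 31.2, 31.8, 32.1]
biomass = [1.2, 1.6, 2.1, 2.4, 3.0, 3.3]

r = correlation(temperature, biomass)
print(f"Pearson correlation: {r:.2f}")
if abs(r) > 0.8:  # assumed threshold for reporting an event
    print("Strong coupling detected -- report event to the control cycle.")
```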

This text is an excerpt from the position paper „Smarte Sensoren für die Biotechnologie“, DECHEMA 2017. If you want to learn more about the concept, applications and technologies of smart sensors, join the conference „Smart Sensors – mechanistic and data driven modelling“ on 1-2 October 2018 in Frankfurt.

 



„More than 50% of companies are not willing to invest in Big Data applications. These companies will go bankrupt over the next 10 years.“ A provocative statement by Thomas Froese from atlan-tec, a software company specializing in process optimization and data mining. And yet, many share his view: Huub Rutten, Sopheon, points in the same direction when he says that „companies that think Big Data Analytics is something for their marketing department, and not relevant to all other functions, will not have a long life anymore.“

The awareness in the process industry is there: according to Accenture's Digital Chemicals Survey in 2014, 79% of chemical industry executives expected digital to make the greatest impact over the next five years through improved productivity. On the other hand, only 58% were embracing digital to gain a competitive advantage over industry peers. Taking the expert statements on competitiveness into account, there seems to be a potentially dangerous gap – big data is being introduced, but in some cases it might be too little, too late.

Predictive Big Data Analytics and Machine Learning will transform most industries by supporting better-informed and more customized decisions by both humans and machines, increasing agility and efficiency, lowering costs, enabling better and more customized products and services, forecasting risks and opportunities, and increasing automation. Early adopters will gain significant competitive advantages, while others are likely to be left behind.

Ralf Klinkenberg, RapidMiner

Big Data analytics has become more and more important to the process industries. Sure, the process industry has always collected data to supervise production processes, to identify disturbances, and to ensure product quality, and this will continue.

The application of data analytics will improve efficiency in the chemical industry!

Matthias Hartman, ThyssenKrupp System Engineering

But with the tremendous increase in data storage and processing capacities, new algorithms have become accessible. They open up new market opportunities, enabling process advantages and cost reductions in production, decreasing time to market and, ultimately, making new customized and individualized products accessible. The areas in which data analytics can be used profitably are diverse and nowhere near exhausted in the process industries.

In the process industry, digital technology provides actionable solutions to challenges and groundbreaking opportunities for innovation.

Matthias Feuerstein, Microsoft

Provided the data is collected, analyzed and used intelligently and efficiently, as Benjamin Klöpper, ABB Research Center, points out: „The major challenge for Big Data Analytics in process industries is not scalable architectures or clever algorithms; it remains having the right data in the right quality to address relevant and pressing issues. Today's information system infrastructure often prevents us from obtaining this right data in an efficient way.“ Nico Zobel, Fraunhofer IFF, adds: „One of the major challenges will be to generate use cases for the analysis of data from heterogeneous sources.“ And the sources in the chemical industry are nearly endless.

Chemical space is big. Very big. Published and unpublished data cover only an exceedingly small part of the possible small-molecule space. Machine learning and algorithmic prediction tools can help fill in the unexplored parts of chemical space. What kinds of data sources and prediction tools will come next?

David Flanagan, Wiley ChemPlanner

Jens Kamionka, T-Systems Multimedia Solutions, widens the scope by taking security issues into account – a major challenge in times of increasing cyberattacks: „Many companies still fail to address security, infrastructure or data quality issues.“ Concepts that allow, on the one hand, company-wide or even inter-company integration of data streams along value chains while, on the other, preventing data leaks and cyberattacks are intensely sought after.

And once these questions are solved – are we looking at a future of fully automated, smart plants that anticipate customers' wishes before they are even aware of them? No, according to the experts. Human intelligence will always have its place. „Big Data opens up many opportunities in different areas of a chemical company. Besides toolsets and technology, mindset and communication skills are additional success factors for these novel approaches,“ says Sindy Thomas, Clariant. And Drazen Nikolic from Ernst & Young sums it up: „If you do not understand the problem, you are not able to frame the right question.“


Meet these and other experts and discuss what big data analytics might mean for your business at the DECHEMA-PRAXISforum „Big Data Analytics in Process Industry“, 1-2 February 2017, Frankfurt. For the full program and registration, visit: DECHEMA-PRAXISforum Big Data Analytics in Process Industry


„Measure everything that can be measured, and make measurable what cannot be measured.“ This recommendation from Galileo Galilei is followed in process engineering. On the basis of measurements, processes are monitored and controlled and predictions about the outcome are made; process efficiency and costs are optimized, quality is assured and error sources are identified. Modern analytics, not least the advances in online and inline measurement technology, ensure that huge data sets are available at any time. Building on these data sets, a whole range of SMEs offer software solutions for process optimization.

Before a mass of data becomes usable information, however, the raw data must be prepared and analysed. A critical and laborious step is preparing the data for further evaluation: data from different sources must be brought into the same format, and calibrations and corrections may be necessary to ensure that measured data are comparable. A simple example is the corrections required when measurements were taken at different temperatures.
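The project description gives no concrete formula, so as a minimal illustration of such a correction the following Python sketch assumes a simple linear temperature compensation and normalizes readings taken at different temperatures to a reference temperature of 25 °C. The coefficient and readings are invented.

```python
# Sketch: linear temperature compensation to a reference temperature.
# Coefficient and readings are hypothetical, not project values.
REFERENCE_T = 25.0  # °C
TEMP_COEFF = 0.002  # relative signal drift per °C (assumed)

def correct_to_reference(raw_value, measured_at):
    """Normalize a raw reading to the reference temperature."""
    return raw_value / (1 + TEMP_COEFF * (measured_at - REFERENCE_T))

# Readings from two instruments taken at different temperatures.
for raw, temp in [(10.12, 25.0), (10.25, 31.0), (10.05, 22.0)]:
    corrected = correct_to_reference(raw, temp)
    print(f"{raw:.2f} @ {temp:.0f} °C -> {corrected:.2f} @ {REFERENCE_T:.0f} °C")
```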

A project within the industrial collective research programme aims to develop a knowledge-based method for data preparation. Known data-preparation procedures are to be documented and catalogued so that every user gains an overview of the available options. To make this catalogue as comprehensive and practical as possible, real case studies from industry are being used.

A corresponding demonstration software is intended to make it possible to merge the data of different measuring instruments and to prepare them with the various methods so that they can subsequently be evaluated. It is particularly important that the user can trace at any time what is happening or has happened to the data, and that the individual steps are carried out as consistently as possible using known algorithms. Any errors and data gaps are also to be made visible as early as possible.
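The actual demonstration software is not public here, so the following Python sketch only illustrates the two ideas described above under assumed structures: every preprocessing step writes to an audit log, and data gaps are flagged as early as possible.

```python
# Sketch: traceable preprocessing -- every step is logged (provenance),
# and data gaps are flagged early. Structure is illustrative only.
def fill_gaps(series, log):
    """Replace missing values (None) with the previous reading and log each gap."""
    cleaned = []
    for i, value in enumerate(series):
        if value is None:
            log.append(f"gap at index {i}: filled with previous value")
            value = cleaned[-1] if cleaned else 0.0
        cleaned.append(value)
    return cleaned

def rescale(series, factor, log):
    """Convert units by a constant factor and record the step."""
    log.append(f"rescaled all values by factor {factor}")
    return [v * factor for v in series]

audit_log = []
data = fill_gaps([1.0, 1.1, None, 1.3], audit_log)
data = rescale(data, 10.0, audit_log)
print("result:", data)
for entry in audit_log:
    print("step:", entry)
```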

The demonstration software can then serve as a starting point for commercial developments that could considerably reduce the effort for data preparation, which today still accounts for about 80% of the total effort of an analysis.

More about the project
