Biography:
Mateusz Malinowski is a Staff Research Scientist at DeepMind. His current work focuses on the intersection of computer vision, natural language understanding and reinforcement learning. He received his PhD (summa cum laude) from Saarland University. During his studies, he worked at the Max Planck Institute for Informatics, where he pioneered research in the area of Visual Question Answering, which gained wide recognition in the scientific community. He has received multiple awards, such as the Dr.-Eduard-Martin-Preis for the best doctoral candidate at Saarland University and the DAGM MVTec Dissertation Award for an outstanding dissertation in Machine Learning, Computer Vision, and Pattern Recognition in German-speaking countries. He gladly shares his knowledge with others as a conference speaker and serves as a reviewer for highly respected AI journals.
Abstract:
Current computer vision research mainly focuses on individual images or short videos due to the challenges associated with processing longer videos. However, experience and reasoning often occur across multiple temporal scales ranging from milliseconds to days.
This talk has three parts. In the first part, I will briefly present various achievements in the multimodal space, including agents navigating streets and a multimodal approach that uses vision to improve translation between languages.
In the second part of the talk, I will present a new framework that enables the processing of hour-long videos with the same hardware that can currently handle only second-long videos. This approach replaces standard video compression techniques with neural compression, resulting in improved efficiency at all pipeline levels and faster training of models on much longer videos. Since processing compressed signals precludes standard augmentation techniques, a small network is introduced that applies transformations to latent codes corresponding to commonly used augmentations in the original video space.
The final part of the talk will introduce Transframer, a generative approach that can be used to solve various computer vision tasks using the same probabilistic formulation. I will also demonstrate our preliminary results of generating a 30-second natural video sequence.
by Agnieszka Słowik | Machine Learning Researcher | Microsoft Research
Biography:
Agnieszka Słowik is a Postdoctoral Researcher working on human-centric, inclusive and responsible AI at Microsoft Research Cambridge. Prior to joining Microsoft, she did her PhD on out-of-distribution generalisation in machine learning at the University of Cambridge, with a particular focus on learning from multiple training distributions and compositional generalisation.
During her PhD, she did several research internships at Mila, Meta AI and Microsoft Research. She has published at top machine learning venues, such as AISTATS and AAAI, and co-organised various scientific events -- most recently, ML in PL 2022, The 5th Workshop on Emergent Communication @ ICLR 2022, and the Oxbridge Women in Computer Science Conference 2022. She has received multiple awards for her work and volunteering, including the Wiseman Prize from the University of Cambridge Department of Computer Science and Technology, the Young AI Researcher 2022 Award at the Perspektywy Women in Tech Summit, and the Myson College Exhibition for Personal Achievement from Lucy Cavendish College, University of Cambridge.
Apart from research and engineering, she enjoys teaching, science communication, outreach activities, hiking, cross-country skiing, reading, cerebral films, travelling (36 countries and counting) and learning foreign languages.
Abstract:
When applied to a new data distribution, machine learning algorithms have been shown to deteriorate. Distribution shifts are caused by spurious correlations that hold at training time but not at test time, changes to the domain, as well as under- and over-representation with respect to future data. In this talk, I present three research studies that aim to investigate and develop the field of out-of-distribution generalisation across different machine learning scenarios: learning from multiple training distributions, image classification, and emergent communication in referential games. These studies lead to a series of open questions and opportunities for future work. The talk concludes with a discussion of the relevance of these questions in the time of mega models such as GPT-4.
by Konrad Banachewicz | Staff Data Scientist | Adevinta
Biography:
Konrad Banachewicz is a Staff Data Scientist at Adevinta and a former Data Scientist at eBay. During his 20 years of industry experience, he has worked with a variety of institutions on a wide range of data analysis problems. He holds a PhD in statistics from Vrije Universiteit Amsterdam, where he worked on modeling problems in credit risk. Konrad is also a double Kaggle Grandmaster and a very active community contributor, committed to sharing his knowledge with others through articles and videos, and has published "The Kaggle Book", a guide to machine learning for competitive data science.
Abstract:
Analysing the customer journey on an e-commerce website is essential for understanding the user's experience and identifying areas for improvement. The customer journey refers to the path a user takes from initial engagement with a website or product to final purchase; placing an ad, a personalised recommendations experience, safe and secure communication, delivery options - all of these can be assisted with Machine Learning, so that a customer walks away happy and does come back. A shared thread is four-fold:
1. Identify pain points
2. Improve user experience
3. Increase conversions
4. Retain customers
An important aspect is KYA: Know Your Audience - depending on what kind of visitor you are (a casual browser? a power user? a disappointed one, on the verge of leaving?), there are different things we can do for you, and for that it is of paramount importance that we understand what you are telling us through your behavior. In this talk, we will look at some applications of the modern AI toolkit that can help us achieve these objectives: from binary classification, through transformers, up to contrastive learning, the ML journey is bumpy at times, but ultimately very much worth it.
by Eyke Hüllermeier | Professor of Computer Science | LMU Munich
Biography:
Eyke Hüllermeier is a full professor at the Institute of Informatics at LMU Munich, Germany, where he heads the Chair of Artificial Intelligence and Machine Learning. He studied mathematics and business computing, received his PhD in computer science from Paderborn University in 1997, and a Habilitation degree in 2002. Prior to joining LMU, he held professorships at several other German universities (Dortmund, Magdeburg, Marburg, Paderborn) and spent two years as a Marie Curie fellow at the IRIT in Toulouse (France). His research interests are centered around methods and theoretical foundations of artificial intelligence, with a specific focus on machine learning, preference modeling, and reasoning under uncertainty. Besides, he is interested in the application of AI methods in other disciplines, ranging from the natural sciences and engineering to the humanities and social sciences. He has published more than 400 articles on related topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards. Professor Hüllermeier serves on the editorial board of leading journals in AI and machine learning and is a regular member of the program committee of major AI conferences. Currently, he also serves as president of EuADS, the European Association for Data Science.
Abstract:
The notion of uncertainty has recently drawn increasing attention in machine learning research due to the field's burgeoning relevance for practical applications, many of which have safety requirements. This talk will address questions regarding the representation and adequate handling of (predictive) uncertainty in (supervised) machine learning. The distinction between two important types of uncertainty, often referred to as aleatoric and epistemic, and how to quantify these uncertainties in terms of appropriate numerical measures will be highlighted. While aleatoric uncertainty is caused by the inherent randomness in the data generation process, epistemic uncertainty is caused by the learner's ignorance of the true underlying model.
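For a concrete feel of one standard quantification scheme (an illustrative sketch of ensemble-based measures, not necessarily the exact measures covered in the talk): with an ensemble of probabilistic classifiers, total predictive entropy decomposes into the expected entropy of the members (aleatoric) plus the members' disagreement, i.e. mutual information (epistemic).

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a categorical distribution."""
    return -np.sum(p * np.log(p + eps), axis=axis)

# Predictive distributions from M ensemble members for one input:
# shape (M, num_classes); rows sum to 1.
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.15, 0.70, 0.15],  # a disagreeing member signals epistemic uncertainty
])

mean_probs = member_probs.mean(axis=0)

total = entropy(mean_probs)               # total predictive uncertainty
aleatoric = entropy(member_probs).mean()  # expected entropy of the members
epistemic = total - aleatoric             # mutual information (disagreement)

print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```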
by Sebastian Dziadzio | PhD Student | University of Tübingen
Biography:
Sebastian is an ELLIS/IMPRS-IS PhD student at the University of Tübingen. Prior to returning to academia, he conducted research in industry, where he worked as a scientist in Microsoft's Mixed Reality and AI Lab and as an AI Resident at Microsoft Research Cambridge. In a previous life, he spent several years working as a software engineer. His primary research interest lies in efficiently learning compositional representations in computer vision.
Abstract:
Synthetic data in machine learning is attracting increased interest, as it frequently offers a faster and cheaper alternative to traditional data collection and annotation in applications ranging from autonomous vehicles to healthcare. It is typically used to supplement real data and often requires domain adaptation to bridge the gap between simulation and reality. In this talk, I will show that it is possible to perform real-world face analysis tasks, such as landmark detection and semantic parsing, with synthetic images alone and without domain adaptation. In addition, I will outline several other applications of synthetic data and discuss how it helps overcome legal, ethical, and privacy issues that arise when collecting real data and enables new approaches in cases where manual labelling is impossible.
by Pola Schwöbel | Applied Scientist | AWS
Biography:
Pola received her PhD in 2022 from the Technical University of Denmark, where she did research on probabilistic machine learning, invariance learning and AI ethics. She recently joined AWS in Berlin as an Applied Scientist, where she works on responsible AI.
Abstract:
Machine learning algorithms work by extracting patterns from large amounts of training data and projecting correlations forward. This mechanism by which ML works becomes a problem when training data is scarce: in the low data regime, performance usually drops dramatically. Data augmentation (DA), i.e. the practice of introducing additional, synthetic data into the training set, is one way to mitigate this problem, and has become a standard tool in the deep learning pipeline. DA is usually performed ad hoc, i.e. based on assumptions rather than a statistical model. Such ad hoc DA cannot guarantee optimal augmentation and might be difficult to perform for data that is not easily interpreted by humans. This talk formalises the practice in order to provide a principled model for learning data augmentation using Bayesian techniques. We close with an outlook on how data augmentation can play a role in algorithmic fairness and in de-biasing ML algorithms.
by Viacheslav Klimkov | Applied Scientist | Grammarly
Biography:
Viacheslav graduated with a Master's degree from NTUU "KPI" in Kyiv in 2013 and has worked at Samsung (2012 - 2014), Amazon (2015 - 2023), and Grammarly (2023 - now). He is mostly experienced in speech technologies, with a gradual shift towards natural language processing.
Abstract:
Generative modeling is an exciting area of deep learning that has the ability to convert randomness into meaningful outputs. It is rapidly expanding and pushing the boundaries of artificial intelligence, becoming an increasingly important area of study in various industries. In this presentation, we will provide an overview of four main approaches to generative modeling: auto-regressive models, generative adversarial networks, normalizing flows, and diffusion models. We will highlight the key concepts behind each approach, and demonstrate how they are used in practice.
Biography:
Maria Ganzha is an Associate Professor at the Warsaw University of Technology. She received her M.A. and Ph.D. in mathematics from Moscow State University in Russia, and a Doctor of Science degree in Computer Science from the Polish Academy of Sciences. Maria has published nearly 200 scientific articles, is editor-in-chief of a book series, and is a member of the editorial boards of 5 journals. She has been invited to the program committees of over 250 conferences. Her scientific interest is focused on agent-based technology, semantic technology, and machine learning. She is the Technical Coordinator of the EU project ASSIST-IoT and principal investigator of the SRIPAS team in another EU project, aerOS. Since 2020 she has been President of the Mazovian Chapter of the Polish Information Processing Society (Polskie Towarzystwo Informatyczne).
Abstract:
The proliferation of digital artifacts with various computing capabilities, the development of very high speed networks, and the emergence of the edge computing paradigm offer new possibilities for the development of Machine Learning solutions. One of the most popular of them is Federated Learning (FL). However, edge computing also brings a number of problems that need to be dealt with. For instance, in real-life deployments, sudden client dropout, low/varying network bandwidth, or divergent local data distributions need to be considered. Interestingly, some of these problems can be partially managed by selecting an appropriate FL topology. The talk will discuss different aspects of realizing Federated Learning in real-world-grounded scenarios.
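For readers new to FL, the aggregation step at the heart of the canonical FedAvg algorithm can be sketched in a few lines; the topology and client-dropout issues discussed in the talk arise around this step. A minimal sketch (the dataset-size weighting is the standard FedAvg formulation, not a detail taken from the talk):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    client_weights: one list of np.ndarray per client (same shapes across clients).
    client_sizes:   number of local training examples per client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_sum)
    return aggregated

# Toy example: three clients, one weight matrix each.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
global_model = fedavg(clients, client_sizes=[10, 30, 60])
print(global_model[0])  # weighted mean, dominated by the largest client
```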
by Michał Kuźba | Machine Learning Scientist | Expedia Group
Biography:
Michał Kuźba is a Machine Learning Scientist at Expedia Group in Switzerland. He has more than 5 years of professional and research experience in Machine Learning and Data Science at companies such as Facebook, Microsoft and MI2DataLab. He describes himself as an enthusiast of Machine Learning, Natural Language Processing, explainability and the human aspects of AI and Data Science.
His recent projects include: “Automatic processing and trends extraction from documents on the topic of AI regulations and Responsible AI”, “Conversational explanations of Machine Learning models using chatbots” and “Travel recommendations using graph embeddings representation”.
When not expanding his knowledge, Michał shares his expertise with other people. He is also passionate about cartography, geography and travelling.
Abstract:
At Expedia, we offer travel products on a global scale.
It means we deal with numerous hotels and even more hotel rooms.
But what do we do if we want to compare them, cluster them, or learn relations and representations?
In this talk, I will discuss similarity learning for hotel rooms.
We will compare multiple techniques, including the classification approach, siamese networks and triplet loss. We will see how to learn the similarity function directly or through representation learning (embeddings).
We will also see how different perspectives (traveller, hotelier, domain experts) can provide various ground truths for the model to learn.
Finally, we will show the benefits of using interpretability techniques in model development and model selection.
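To make the triplet-loss idea mentioned above concrete, here is a generic sketch (a toy encoder on random features, not Expedia's model): an encoder maps each room to an embedding, and the loss pulls an anchor closer to a positive (a similar room) than to a negative by a margin.

```python
import torch
import torch.nn as nn

# Toy encoder; in practice it would consume room images/attributes.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
triplet_loss = nn.TripletMarginLoss(margin=1.0)

anchor   = encoder(torch.randn(8, 32))   # embeddings of 8 anchor rooms
positive = encoder(torch.randn(8, 32))   # rooms judged similar to the anchors
negative = encoder(torch.randn(8, 32))   # rooms judged dissimilar

loss = triplet_loss(anchor, positive, negative)
loss.backward()                          # gradients flow into the shared encoder
print(float(loss))
```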
by Jan Kanty Milczek | Principal Data Scientist | deepsense.ai
Biography:
Award-winning computer scientist and data analyst. One of the founding members of deepsense.ai. He specializes in applying data structures and algorithms in artificial intelligence.
Abstract:
"GPT can be extremely impressive, but we turn that impression around with a few carefully crafted prompts. The wrong answers are given with an almost human-like confidence.
We will discuss the various mistakes of GPT models in relation to their training processes."
by Sebastian Cygert | ML Researcher | Assistant Professor | IDEAS NCBR | Gdańsk University of Technology
Biography:
Sebastian is a postdoctoral researcher at IDEAS NCBR and also an assistant professor at the Gdańsk University of Technology, where he earned his PhD. Previously, he was employed as an Applied Scientist at Amazon and contributed to projects such as the visual perception system for the autonomous robot Amazon Scout. He has extensive experience in a variety of computer science topics and has worked for Moody's Analytics on mathematical modeling. His research focuses on the real-world generalization and efficient computation of machine learning algorithms. In addition, he is collaborating with the Medical University of Gdańsk on a project aimed at early cancer diagnosis through the use of liquid biopsies.
Abstract:
Continual Learning is an emerging paradigm in machine learning where models learn new tasks in sequence without access to data from previously seen tasks. The key challenge in this field is addressing the problem of catastrophic forgetting of previously acquired knowledge. This is crucial because in many real-world scenarios, models must be able to adapt to constantly evolving data and tasks. During this presentation, I will discuss the latest developments in Continual Learning research and explain why it is an important area of research focus for us at IDEAS NCBR.
by Michał Bień | Research Engineer | Allegro
Biography:
Michał is a Research Engineer in the Allegro Machine Learning Research team and a graduate of EPFL and Poznań University of Technology. During his career, he has worked on a variety of ML projects, notably in the fields of natural language processing, text generation, and recommender systems - both in academia and industry.
Abstract:
Did you know that Allegro, the largest e-commerce platform in Poland, now operates in multiple languages? The key step for a successful international expansion is having accurate and fast translations of product offers and descriptions. In this talk, we are excited to present how the Machine Learning Research department of Allegro developed an in-house Machine Translation solution, beating the most popular commercial translation tools on chosen language pairs and bringing significant cost savings to the company every day. This achievement has been made possible through a combination of careful data preparation, modern evaluation metrics, and cutting-edge ML techniques such as deep transformer-based models and knowledge distillation.
by Agata Hanas-Szadkowska & Mateusz Otmianowski | AI Scientists | Pearson
Biography:
Agata works as a Senior AI Scientist at Pearson. She has a couple of years of experience in Artificial Intelligence/Machine Learning. Her professional interests are connected with Natural Language Processing. She obtained her Master's degree in Applied Mathematics from Wroclaw University of Science and Technology. In her free time, she loves hiking and travelling.
Mateusz Otmianowski is a Principal AI Scientist at Pearson. He is focused on solving education problems with Machine Learning solutions. Before joining Pearson, he worked in various roles in e-commerce and the financial sector.
Abstract:
Models from the GPT family offer state-of-the-art performance on many tasks and are accessible through a simple API. This makes them an interesting technology to experiment with and build products around. We believe that one of the more interesting areas in which to use GPT models is education, as they make it easier to deliver personalized learning at scale. That is why we have spent much time experimenting with GPT models in the past months, using them in various contexts such as learning content generation and as teaching agents. In our talk we would like to share lessons learned from those experiments, discuss what worked and what didn't, and point to the areas we believe still need to be researched.
Biography:
Juggling work, family, friends, health and soul.
I'm passionate about the intersection between computing infrastructure and artificial intelligence. With experience in the IT, financial and pharmaceutical industries, I have seen a wide spectrum of the different challenges that need to be taken into account when putting ML into production. I'm totally inspired by my current role, since I can contribute back to society in a more direct manner by helping accelerate drug discovery from my MLOps engineer trench.
I work with Python, Kubeflow and GitLab, and I'm constantly looking for new cloud-native technologies that can help us bridge the gap to production while minimizing maintenance. I am part of a novel team of MLOps engineers at Roche with a common passion and a clear goal.
Abstract:
Machine Learning Operations (MLOps) has been maturing over the last two years, and we are finally beginning to observe the rewards of focus and synergy from multiple groups: we have produced an enterprise MLOps reference architecture. The road towards this company success has led through trial and error and required investing multiple resources into the effort. This talk will focus on the roadmap towards obtaining such a reference architecture, summarize lessons learned and technologies used, and present the reference architecture.
by Kacper Dominiak | GPU Software Development Engineer | Intel
Biography:
Kacper is a Graphics Software Development engineer at Intel, where he is involved in various computer vision-related projects. His main area of expertise and interest is in image processing, where he has already authored several projects. Before joining Intel, Kacper worked on object detection at FlexLink, where he was responsible for developing and improving the object detection system. Kacper is passionate about technology in general and gaining new skills. In 2021, he earned his Bachelor's degree in Automatic Control and Robotics from Poznan University of Technology.
Abstract:
There are many types of visual corruption. The simplest, such as a black screen, are easily detected using the basics of Computer Vision. Everything changes when we want to detect visual corruption in games: sometimes a black screen is part of gameplay, sometimes not. As problems become more complex, recognizing them comes down to checking multiple images at once. In such cases, machine learning becomes the only solution.
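To make the "basics of Computer Vision" concrete, a naive black-screen check can be as simple as thresholding brightness statistics (an illustrative sketch with made-up thresholds, not Intel's actual detector):

```python
import numpy as np

def is_black_screen(frame, brightness_thresh=8.0, variance_thresh=4.0):
    """Flag a frame as a candidate black screen.

    frame: HxWx3 uint8 image. The thresholds here are illustrative guesses;
    a real detector must also consult game context, as noted above.
    """
    gray = frame.mean(axis=2)
    return gray.mean() < brightness_thresh and gray.var() < variance_thresh

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(is_black_screen(frame))  # True - but possibly legitimate gameplay!
```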
by Piotr Biliński | Senior Applied Scientist | Amazon Alexa TTS
Biography:
Piotr Biliński is a Senior Applied Scientist at Amazon Alexa. He obtained his PhD in Computer Science with first class honours from the University of Nice in 2014. He was then a Postdoctoral Research Fellow at INRIA and a Postdoctoral Researcher at the University of Oxford. Further professional appointments include work at Microsoft Research and Honeywell Laboratory, as an AI mentor at NASA FDL and FDL Europe, and as an Assistant Professor at the University of Warsaw. His research interests include deep learning and machine learning. His latest results deal with generative models and disentangled representation learning for text-to-speech and video generation. Piotr has authored a US patent and 30+ research papers at major speech/audio, computer vision and machine learning venues (ICASSP, Interspeech, CVPR, AAAI, IJCAI, NeurIPS, ECCV, WACV, and others). He serves as a chair and reviewer at major computer science conferences and journals. He is a member of IEEE, AAAI and the Computer Vision Foundation. He supports the work on the Polish AI strategy and is an expert of the National Science Center and the Foundation for Polish Science.
Abstract:
In this talk I will focus on the concept of speech disentanglement for controllable text-to-speech synthesis. Firstly, I will briefly introduce text-to-speech and explain why it is such an important research area. Then, I will explain what speech disentanglement is. Finally, I will dive deeper into text-to-speech technology and show how we leverage deep learning and various speech disentanglement techniques to control the text-to-speech synthesis process.
by Albert Wolant & Kamil Tokarski | Deep Learning Software Engineers | NVIDIA
Biography:
Albert Wolant is a software development engineer working on the DALI team at NVIDIA. He has experience in both Deep Learning and GPGPU software development. In the past, he did research work in parallel algorithms and data structures.
Kamil received his master's degree from the University of Warsaw, where he focused on formal verification of smart contracts.
Abstract:
Data augmentation is a crucial technique in deep learning, allowing us to enrich training data from existing samples by applying various transformations to the input images. This technique not only increases the effective size of the training dataset but also improves the model's robustness to variations and reduces overfitting. However, designing an effective set of augmentation operations (policies) manually can be a challenging task, and the optimal policy can vary from dataset to dataset.
AutoAugment is an approach to automatically search for the best data augmentation policies for a given dataset and task using reinforcement learning. By applying machine learning to itself, AutoAugment can discover new augmentation policies that can significantly improve the performance of deep learning models.
In this presentation, we will focus on the importance of data augmentation for achieving state-of-the-art results. We will provide a comprehensive overview of how automatic augmentations work and demonstrate how various automatic augmentation techniques can be integrated with NVIDIA DALI and run efficiently on the GPU, enabling fast and effective data augmentation for deep learning models. We will showcase how these techniques can significantly improve the performance of deep learning models.
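For readers who want to see what applying a learned policy looks like in code, here is a minimal sketch using torchvision's AutoAugment implementation (shown instead of the DALI API for brevity; the DALI integration covered in the talk follows the same idea of applying a searched policy inside the input pipeline):

```python
import torch
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy
from PIL import Image

# A policy searched on ImageNet, applied independently to each sample.
augment = transforms.Compose([
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),
    transforms.ToTensor(),
])

image = Image.new("RGB", (224, 224), color=(120, 80, 40))  # stand-in image
augmented = augment(image)
print(augmented.shape)  # torch.Size([3, 224, 224])
```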
by Karolina Wadowska | Data Scientist | PhD Candidate | Citi Handlowy | AGH
Biography:
Karolina has been working as a Data Scientist since 2017, mainly in the area of machine learning for the financial sector. The first models she built dealt with predicting the departure of the bank's corporate clients and their behavioral segmentation. As part of her PhD studies, she is currently working on the prediction of corporate financial condition using deep learning and NLP methods.
Abstract:
One of the characteristics of working with textual data is its vagueness. Capturing the semantics of language requires more advanced models, and the very concept of fuzzy logic can be the basis for creating such technical solutions. The presentation will therefore cover ways of working with fuzzy logic-based language processing methods.
by Jędrzej Świeżewski | Head of AI | Appsilon
Biography:
Jędrzej leads the Machine Learning team at Appsilon. His focus centers around applications of computer vision techniques in the analysis of human shape and motion. Additionally, he is involved in numerous collaborations with life scientists, in particular ecologists, in which computer vision is used to aid the study of life on Earth. More about the ML projects run at Appsilon can be found here: https://appsilon.com/ai-research/
He started his career as a theoretical physicist, made a turn into data science and finally focused on computer vision.
Privately, Jędrzej enjoys skiing, as well as ice-skating, and tries to turn any conversation into a discussion about one of those.
Abstract:
During the talk I will present recent results concerning counting nests of Antarctic birds in images captured from drones.
The study is based on magnificent data (with spatial resolutions reaching 0.6 cm/px) which contains crucial information about the well-being of the still understudied ecosystem of the Antarctic. This, in turn, is vital to monitoring the rapid changes the ecosystem is undergoing due to the climate crisis.
I will discuss the statement of the problem (an interesting application of object detection), the key challenges, and the solutions we have developed to tackle them.
The results were obtained by the Appsilon team in collaboration with scientists from the Arctowski Polish Antarctic Station.
I will wrap up the talk with a quick overview of the recent results of projects that Appsilon's ML team has been involved in globally, regarding Arctic oceans and African savannas - I will be happy to discuss them during the conference.
by Roman Prykhodchenko | Data scientist | WIDMO Spectral Technologies
Biography:
Roman Prykhodchenko is a Data Scientist at WIDMO Spectral Technologies and a Ph.D. candidate at the AGH Doctoral School. He is passionate about building complex systems, ML, and computer vision, and their application in the domain of Ground Penetrating Radar data. He has a strong interest in automation and data visualization, and a background in autonomous robotics and computer vision. He was a mentor of the 4th Robotic Craftsmanship International Academy in 2019.
Abstract:
We are a company developing the innovative Spectral Ground Penetrating Radar (SGPR). Our aim is to revolutionize the current state of underground detection by applying cloud, machine learning, and deep learning solutions to data from our state-of-the-art SGPR device.
Our work focuses on applications of computer vision, ML, and DL techniques in the analysis of Spectral Ground Penetrating Radar data, and on data visualization. Data recorded by the radar are complex and usually require some level of expertise in the domain of geophysics to be analyzed, labeled, and understood. The tangible information our customers get from the spectral ground penetrating radar helps them answer the question of what is underground without using invasive methods or having prior knowledge of the geophysics domain.
During the talk, we will try to answer the following questions:
- What are we looking for underground?
- What type of data does the SGPR produce, and how do we process it before training ML models?
- How to get labeled datasets based on knowledge discovery and other techniques?
- Which model architecture to use?
- How to evaluate model predictions?
by Łukasz Rączkowski | Senior Research Engineer | Allegro
Biography:
Senior Research Engineer in the Machine Learning Research team at Allegro, where he works on applying and advancing NLP methods in the e-commerce domain. He obtained his PhD from the University of Warsaw, where he focused on machine learning methods for histopathology.
Abstract:
The problem of label noise is critically important in supervised learning - after all, when the training data is corrupted, it may be hard to recover the correct training signal. A simple solution, i.e. more thorough data annotation, is unfortunately rarely possible in practice, as machine learning projects have both time and cost constraints. As such, it is necessary to accept the presence of noise in training data and develop robust models. We evaluated a set of methods that increase the robustness of machine learning models to various forms of label noise in the context of extreme text classification. This setting is novel for the problem of label noise, as the majority of existing research focused on small-scale image classification. We show that simple symmetric noise is easy to overcome even for very high noise ratios, while more complex noise patterns are much harder to deal with. In addition to experimental results, we discuss the relation between robustness, memorization and training dynamics.
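For concreteness, the symmetric noise model referenced above (the easy case in our experiments) flips each label, with probability p, to a uniformly chosen different class. A minimal sketch:

```python
import numpy as np

def add_symmetric_noise(labels, num_classes, p, seed=0):
    """Flip each label with probability p to a different class, uniformly."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    for i in np.where(flip)[0]:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

y = np.array([0, 1, 2, 2, 1, 0, 2, 1])
print(add_symmetric_noise(y, num_classes=3, p=0.5))
```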
by Łukasz Gątarek | Researcher | Tinbergen Institute
Biography:
Łukasz is a data scientist with extensive professional experience at the intersection of statistical inference and machine learning. His research path oscillates closely around time series analysis and stochastic processes. During his consultancy career, Łukasz has advised global companies in key projects related to data management and machine learning.
Abstract:
The problem of applying artificial neural networks (ANNs) to nonstationary trending time series remains practically unsolved. No improvements to the optimization procedures proposed so far have proven fully satisfactory. We present a new and original solution involving the transformation of the original data into a new series in the form of a sequence of jumps, defined as time distances between succeeding maxima reached by a variable. This new series is representable by a compound Poisson distribution and is stationary, in contrast to the original data generating process. Simulation experiments confirm that the jump process can serve as a feature of the ANN. It retains the most important information about the nature of the original time series, particularly about the composition of stochastic and deterministic trends causing nonstationarity.
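A minimal sketch of the transformation as described above, with one assumption made explicit: "succeeding maxima" is read here as successive running record highs of the series.

```python
import numpy as np

def jump_series(x):
    """Time distances between successive running maxima of a series.

    Interpretation of the abstract (an assumption): a 'jump' occurs when
    the series reaches a new maximum; we record the waiting times.
    """
    record_times = [0]
    running_max = x[0]
    for t in range(1, len(x)):
        if x[t] > running_max:
            running_max = x[t]
            record_times.append(t)
    return np.diff(record_times)

rng = np.random.default_rng(42)
trending = np.cumsum(rng.normal(0.1, 1.0, size=200))  # nonstationary series
print(jump_series(trending))  # waiting times between new records
```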
by Barbara Sobkowiak | Data Science Manager | Britenet
Biography:
For the past 10 years I have been involved in data analytics in the broadest sense, starting with spatial data and later moving on to more standard data, e.g. tabular data. For the past 4 years I have been leading a team as a Data Science Manager, where we implement projects for our clients holistically, i.e. starting with the analysis of requirements and business processes and ending with the implementation and, if necessary, maintenance and development of the solution. In my work I am a bridge between the world of business and the world of technology. I specialize in translating technical and analytical issues into business language.
Abstract:
For many people, time series is a rather trite topic, and forecasting sales or orders seems trivial: either the issue ends with ARIMA or, conversely, there is a belief that it is enough to throw the data into neural networks. After all, it can't be that hard, since logistics and retail companies somehow manage to do it, and on Kaggle the data is so nice.
Nothing could be further from the truth. With an assortment of a few dozen to several thousand products, several to several thousand locations, and daily granularity, the issue is no longer so simple.
When you're modeling a demand forecast, a number of questions arise: how to deal with infrequent sales, what evaluation metric to choose, how much impact business decisions have on the process, and others. In the presentation, I will tell you what can happen on this type of project, what will surprise you in the data and the business, and what the options for solving these problems are.
by Tomasz Pędzimąż | NLP R&D Expert | Techmo
Biography:
Tomasz Pędzimąż is a machine learning professional with expertise in natural language processing and speech recognition technologies and methods. As the Head of the Development Team, he leads the creation of innovative technologies used in voicebot products and solutions.
He specializes in statistical language models, which he applies to improve the accuracy and performance of speech recognition systems. He leads a highly skilled machine learning team that provides market-ready services in the speech domain.
Abstract:
Developing a human-like voice for diverse business applications is an ongoing challenge. In this presentation, we will journey through the continuous advancements of Techmo TTS, highlighting new features and enhancements such as IPA alphabet support, phonetic lexicons, improved prosody, SSML tags usage, and voice personalization.
We will examine the role machine learning plays in managing these features and elevating the overall TTS experience. Additionally, we will introduce TTS Studio, a custom web app offering a suite of tools designed to optimize accessibility and quality across various prompts and applications. TTS Studio delivers natural-sounding voices, real-time synthesis, and other valuable features that enrich the TTS experience.
Attendees will gain insightful knowledge of the latest innovations in Techmo TTS technology and learn how to leverage these advancements in their own applications.
by Marcin Mosiolek | Chief Data Scientist | SII Poland
Biography:
Marcin Mosiolek is the Chief Data Scientist at SII, where he coordinates the work of AI engineers on automated document reading projects. He also leads a novel research project related to adding new objects to already recorded videos. He has been working in the industry for more than 12 years, developing successful AI-based products for both startups and large enterprises, including large-scale document processing and several search-related products. Marcin has also led initiatives in autonomous driving and augmented reality.
Abstract:
ChatGPT has brought more attention to natural language processing than any other AI model. However, relying entirely on ChatGPT’s responses is extremely risky due to its tendency to hallucinate non-existent facts, which effectively precludes its use in critical domains such as healthcare or law. Fortunately, it’s possible to take advantage of the excellent conversational capabilities of modern large language models while restricting their responses to strong evidence present in a set of specific documents. Retrieval-Augmented Generative Question Answering is an approach that combines retrieval-based methods, such as semantic search, with conversational models, making the above possible. We used this approach to augment our in-house semantic search with a chatbot that provides answers in natural language.
The talk aims to present practical applications of concepts such as large language models, semantic similarity of texts, semantic search engines and, finally, retrieval-augmented generative question answering. The presentation shares experience collected in customer-facing projects.
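A minimal sketch of the retrieval-augmented pattern described above (a generic pseudo-flow, not SII's system; the `embed` placeholder stands in for a real sentence encoder, and the final prompt would be sent to an LLM):

```python
import numpy as np

def embed(texts):
    """Placeholder embedding function; swap in a real sentence encoder."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))

documents = [
    "Invoices must be archived for five years.",
    "Support is available on weekdays from 9 to 17.",
]
doc_vectors = embed(documents)

def build_prompt(question, top_k=1):
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = [documents[i] for i in np.argsort(scores)[::-1][:top_k]]
    # The LLM is instructed to answer ONLY from the retrieved evidence,
    # which is what restricts hallucination.
    return ("Answer using only the context below; say 'I don't know' "
            f"otherwise.\nContext: {context}\nQuestion: {question}")

print(build_prompt("How long do we keep invoices?"))
```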
by Karol Piniarski | Machine Learning and Computer Vision Consultant | Lingaro | Poznań University of Technology
Biography:
Karol Piniarski has been specialising in computer vision for almost 10 years. He has been involved in many research and commercial projects in the fields of image generation, deep learning, object detection, image compression and ADAS. He is a Machine Learning and Computer Vision consultant at Lingaro Sp. z o.o. and, additionally, in constant cooperation with the Poznań University of Technology (PUT) as an assistant professor at the Institute of Automation and Robotics. He received his doctoral degree in 2022 at the Poznań University of Technology.
Abstract:
Over the past year, there has been incredibly rapid development of model architectures that generate ever higher quality images. This opens new opportunities for business applications, but also poses challenges that must be addressed. In our project, we propose the utilisation of automatic image generation for marketing purposes. We have prepared an innovative framework that includes many steps: natural-language processing, text-to-image generation based on recent diffusion models, fine-tuning, super-resolution and reranking. Thanks to this, we are able to automatically generate advertisement images adapted to a given customer segment. This project is supported by the National Centre for Research and Development under grant number POIR.01.01.01-00-0062/20.
Biography:
Currently a Data Scientist (working with NLP) at deepsense.ai, one of the largest and most recognized companies in Poland, preparing complex solutions to customers' practical problems using both state-of-the-art solutions and implementations of its own ideas in the field of artificial intelligence. He obtained his Master's degree in artificial intelligence in September 2022 at Poznan University of Technology and was honoured with the prestigious award/medal of the university, granted by the Rector to the top 5 graduates. He presented a self-developed method achieving state-of-the-art results for imbalanced data at the poster session of an international conference (MLinPL). A machine learning enthusiast, interested in continuous development and in deepening both theoretical and practical knowledge, he is eager to undertake difficult challenges and tasks.
Abstract:
Aspect Sentiment Triplet Extraction (ASTE) is a task in natural language processing that focuses on automatically identifying aspect phrases, opinion phrases, and sentiment polarity from a given text. The goal of ASTE is to accurately extract and categorize the opinion expressed in a text, which is crucial for sentiment analysis and opinion mining applications. This work presents a new approach to ASTE. The approach, which involves a learnable extractor of spans and a differentiable span selector, improves upon existing methods by capturing interrelations between spans during model output construction. Experiments show that this approach outperforms other state-of-the-art methods in terms of the F1-score.
by Michał Tyrolski | Deep Learning Researcher | DeepFlare | the University of Warsaw
Biography:
Michal Tyrolski is a Deep Learning Researcher at DeepFlare and the University of Warsaw. He graduated from the University of Warsaw and currently works in Piotr Miłoś' research lab on reinforcement learning. Michal's areas of interest include meta and model-based reinforcement learning. He has previously worked at companies such as Nvidia, Microsoft, and Samsung.
Abstract:
Complex reasoning problems contain states that vary in the computational cost required to determine a good action plan. Taking advantage of this property, we propose Adaptive Subgoal Search (AdaSubS), a search method that adaptively adjusts the planning horizon. To this end, AdaSubS generates diverse sets of subgoals at different distances. A verification mechanism is employed to swiftly filter out unreachable subgoals, allowing the search to focus on feasible further subgoals. In this way, AdaSubS benefits from the efficiency of planning with longer subgoals and the fine control offered by shorter ones, and thus scales well to difficult planning problems. We show that AdaSubS significantly surpasses hierarchical planning algorithms on three complex reasoning tasks: Sokoban, the Rubik's Cube, and the inequality-proving benchmark INT.
by Sylwana Kaźmierska & Joanna Piwko | Data Scientists | Digica
Biography:
Sylwana has 5+ years of experience, mostly in the image domain (super resolution, synthetic images, augmented reality) and predictive maintenance. She’s been working mostly in the automotive, semiconductor, telecommunication and financial sectors, for companies like AMD, T-Mobile, TomTom, AT&T. Sylwana is also a content creator and a former TEDx speaker on the topics of Machine Learning.
Joanna has 4+ years of experience in research and commercial application of ML solutions in such fields as chemistry, radars, network traffic, sales and the military. She has a broad ML overview, having developed projects in NLP, preventive maintenance, computer vision and - most recently - point cloud computing. Joanna has a biomedical and computer science university background.
Abstract:
Point clouds enable us to represent the shape of objects in 3D space, and are an omnipresent type of data used for space mapping in meteorology, Earth observation, or autonomous vehicles, to name only a few. Just like in typical Data Science projects, point clouds are subject to classification and segmentation tasks. What's not trivial, though, is how to deal with noise and the bad quality of such data in those projects. It turns out that this topic is still almost undiscovered. The majority of research on it focuses mostly on artificially added Gaussian noise, which is simply not enough. Such noise is too simplistic and doesn't cover real-world scenarios, such as ours: point cloud denoising of mobile built-in sensor outputs. Thus, we had to develop some solutions on our own. Fortunately, we discovered that real noise is quite an interesting problem to tackle. Firstly, because data quality can be affected by physical phenomena such as wave interference, and knowledge of that can really help. And secondly, because when we apply deep learning models on top of that (Pointfilter being one of them), those methods can actually work.
by Sebastian Białkowski | Head of Data Science | Pixel Technology
Biography:
He completed his architectural studies with elements of programming at Dessau International Architecture, Hochschule Anhalt in Germany. He was a DAAD scholarship recipient. In 2018, he defended his PhD with honors at the Lodz University of Technology, presenting his work on "Optimization of Continuum Material Topology in Search of Architectural Forms. Adaptation of Selected Engineering Methods for Architectural Purposes". Since 2019, he has been working at Pixel Technology, first as a Data Scientist; a year later, he took on the position of Head of Data Science, leading the Data Science team. During this time, he has been involved in the development of tools for anatomical segmentation of the brain and lungs from various studies, including MRI and CT. He is a co-author of an automatic system for detecting changes typical of SARS-CoV-2 on CT images. Since 2021, he has been a certified Scrum Master.
Abstract:
Multiple Sclerosis (MS) is a debilitating neurological disease that affects millions of people worldwide. Tracking the progression of MS is critical to providing timely and effective treatment to patients. However, traditional methods of monitoring MS using MRI can be time-consuming and subjective. In response, we developed the FLAIR Lesions and Atrophy Report (FLAAR), an automated pipeline that provides neurologists with fast and reliable tracking of MS progression.
The FLAAR pipeline is based on an Airflow DAG, allowing for comparative analysis of brain MRI images using FLAIR, T1w, and T1w contrast-enhanced images. The system's core is composed of two UNet-like neural networks trained on an in-house dataset prepared by highly trained radiologists. These models enable accurate segmentation of both anatomical brain structures and demyelinating lesions. To validate the stability and repeatability of our anatomical brain segmentation solution, we conducted a test-retest experiment on over 40 healthy brain MRIs. In addition, we created percentile grids of brain structure volumes in the healthy population using the GAMLSS model to enhance atrophy tracing. This information enables neurologists to better compare patient trends and disease progression. The results of our research are converted to the HL7 FHIR standard and published to the hospital VNA system, where specialists can evaluate the results on interactive web-based panels. By automating MRI analysis and providing accurate and reliable tracking of MS progression, FLAAR is poised to become an indispensable tool for neurologists working with MS patients. Our innovative approach to MS monitoring using FLAAR provides a fast and reliable solution that can improve patient outcomes while saving valuable time for medical professionals. We believe that our research has the potential to revolutionize the way MS is monitored and treated.
by Rafal Pytel | Senior Machine Learning Engineer | Reasonfield Lab
Biography:
Rafal Pytel is a Senior Machine Learning Engineer at Reasonfield Lab. He received his MSc in Computer Science from the Delft University of Technology under the supervision of Jan van Gemert from the Computer Vision Lab, with expertise in Human Pose Estimation. His research focus lies in a variety of fields, from fine-grained Image Classification to Semantic Segmentation. In his professional career, he has gained experience applying deep learning solutions in the agriculture and automotive industries. In his free time, Rafal enjoys windsurfing, skiing and travelling around the world.
Abstract:
Have you ever wondered what this famed Explainable AI (XAI) is? Have you also wondered about its usage in domains other than dogs vs cats? Or have you heard something about GradCAM and wondered how you could use it in your project? In this presentation, using the brand-new, rapidly developing library FoXAI, you will learn about some algorithms (not only GradCAM) used for Computer Vision (e.g. Object Detection or Human Pose Estimation), how to use them, and what kind of adjustments they need for domains other than Image Classification. Additionally, we will explore some mistakes in explanations and some methods to quantify the quality of a produced explanation without the tedious and expensive work of domain experts. To check out the library before the talk, explore https://github.com/softwaremill/FoXAI.
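For those who have only "heard something about GradCAM", here is a minimal self-contained sketch of the algorithm itself (a generic PyTorch illustration, not the FoXAI API; assumes torchvision >= 0.13 for the weights argument):

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def capture(module, inputs, output):
    # Keep the feature maps and register a hook for their gradient.
    activations["a"] = output
    output.register_hook(lambda grad: gradients.update(g=grad))

model.layer4.register_forward_hook(capture)  # last convolutional block

x = torch.randn(1, 3, 224, 224)
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top class score

# Grad-CAM: weight each channel by its spatially averaged gradient, ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
cam = torch.relu((weights * activations["a"]).sum(dim=1))  # (1, 7, 7)
print(cam.shape)  # upsample to 224x224 to overlay as a heatmap
```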
by Dominik Lewy | Data Science Expert | Lingaro
Biography:
Dominik has over 8 years of hands-on experience in Machine Learning, Deep Learning, Data Exploration and Business Analysis projects, primarily in the FMCG industry. He is a technical leader, setting goals and preparing road maps for projects. He is also a PhD candidate at Warsaw University of Technology, where he focuses on the study of neural networks for image processing. He tries to be a bridge between the commercial and academic worlds. His main research interest is digital image processing in the context of facilitating the adoption of deep learning algorithms in business settings where training data is scarce or non-existent.
Abstract:
The presentation covers recent advances in data augmentation via mixing images for computer vision. It builds understanding of the two main classes of augmentation (pixel-wise mixing and patch-wise mixing) and passes on intuition about what types of problems they are most useful for, as well as what other modalities (e.g. text) they can be generalized to. All this is accompanied by the visual aspect (what does the resulting augmented image look like?) and an empirical evaluation of the methods. The presentation is based on a recent survey (https://link.springer.com/article/10.1007/s10462-022-10227-z).
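To make the two classes concrete before the talk, here is a generic sketch of their canonical representatives, MixUp (pixel-wise) and CutMix (patch-wise); in both methods, labels are mixed with the same coefficient lam:

```python
import numpy as np

def mixup(img_a, img_b, lam):
    """Pixel-wise mixing: a convex combination of two images."""
    return lam * img_a + (1 - lam) * img_b

def cutmix(img_a, img_b, lam, seed=0):
    """Patch-wise mixing: paste a rectangle of img_b into img_a.

    The patch covers a fraction (1 - lam) of the image area.
    """
    h, w = img_a.shape[:2]
    rng = np.random.default_rng(seed)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = rng.integers(0, h - cut_h + 1)
    x = rng.integers(0, w - cut_w + 1)
    out = img_a.copy()
    out[y:y + cut_h, x:x + cut_w] = img_b[y:y + cut_h, x:x + cut_w]
    return out

a = np.ones((32, 32, 3)) * 0.9
b = np.zeros((32, 32, 3))
print(mixup(a, b, lam=0.7).mean(), cutmix(a, b, lam=0.7).mean())
```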
by Michał Mikołajczak | Tech Lead & CEO | Datarabbit.ai
Biography:
Michal Mikolajczak is a founder and Tech Lead at datarabbit.ai, a data/machine-learning focused software house that helps organizations utilize their data and gain competitive advantage by designing, building, and shipping AI- and data-driven solutions for their businesses. Working there on a variety of projects from different industries, he has gained a broad range of diversified ML experience, with a focus on productization. His primary background, however, is image processing: for a couple of years he worked in the medical imaging field, including being the CTO of a startup that was successfully acquired by a NASDAQ company. Privately, he is a big fan of BI and any kind of data visualization system that allows storytelling with data, and an enjoyer of Pratchett's works.
Abstract:
AI and ML technologies have been on the rise in recent years, developing and popularizing very rapidly. But it has turned out that there are multiple ML-specific challenges beyond model performance that matter when bringing ML systems to production. As data is the lifeblood of ML solutions, many of those challenges are associated with it; a related key area on which we will focus during this talk is data versioning. Keeping track of the data utilized and versioning it in a reliable way is mandatory in some regulated industries (e.g. medical devices), but due to the value it brings, it is becoming a necessity in more mature/production projects, and an incredibly helpful addition even in small/personal ones. During this talk, we will go through the key concepts of data versioning, what value it brings (and how exactly), as well as important factors and challenges one might need to consider when dealing with its practical implementation.
by Jakub Nowacki | Senior Data Scientist | Amazon Ring
Biography:
A maths and computers guy and a leader working in R&D. He graduated from Gdańsk University of Technology and the University of Bristol, where he obtained a PhD in Engineering Mathematics. In his day-to-day work he enjoys combining his analytical expertise with programming skills, especially in Machine Learning. Through his career he has managed R&D teams in many companies working in the fields of technology, media, pharmaceuticals and petrochemicals. He is mostly interested in implementing math and science in a business environment. Jakub originally has a C/C++ background but currently works mostly in the JVM and Python world. Finally, he is an educator in topics connecting engineering and science in the field of Data and AI/ML.
Abstract:
Recommender systems are by all means a quite mature topic in the AI/ML community. Nonetheless, they are of paramount importance for business, bringing direct value for marketing, sales and customer satisfaction, to name a few. In the talk I will review modern approaches to recommender systems from both the scientific and business worlds and point out which algorithms are useful for particular business problems. Finally, I will present how we approach a less typical recommendation problem we have in Amazon Ring.
by Patryk Rygiel | AI Engineer | Student | Hemolens Diagnostics | Wrocław University of Science and Technology
Biography:
I hold a BSc degree in Algorithmic Computer Science and am currently in the 2nd year of MSc studies in AI at Wrocław University of Science and Technology. At the university I lead an ML team in the research group SOLVRO. I work as an AI engineer at the med-tech company Hemolens Diagnostics. My main area of research and interest lies in applications of geometric deep learning, mainly neural networks dealing with 3D point clouds and meshes, in medical image processing. I am an author of two papers in this domain which were published at MLMI 2022 and GeoMedIA 2022 respectively - both conferences endorsed by MICCAI.
Abstract:
The development of automated medical image processing methods is an essential step in providing non-invasive, accurate diagnostic tools. Medical images, such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), take the form of 3D voxel grids, for the analysis of which Convolutional Neural Networks (CNNs) are the default choice. In recent years, many novel techniques have been proposed for 3D data structures less commonly used in deep learning, such as point clouds and meshes. These approaches have already proven to be a great complement to CNNs in the field of medical image processing. In this talk, I would like to introduce the concept of neural networks used to process 3D point clouds via the blueprint of geometric deep learning and showcase the most common architectures, PointNet and PointNet++. I will discuss the benefits of using point clouds instead of voxel grids, with a focus on medical image processing. Finally, I will showcase applications of point clouds to vascular medical images by discussing my publication at GeoMedIA 2022, titled "Eigenvector grouping for vessel labeling", and ongoing research in this area.
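The core PointNet idea fits in a few lines (a simplified sketch that omits the input and feature transform networks of the full architecture): a shared per-point MLP followed by a symmetric max-pool, which makes the network invariant to the order of the points.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP followed by permutation-invariant max pooling."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, points):                 # points: (batch, n_points, 3)
        feats = self.point_mlp(points)         # same MLP applied to every point
        global_feat = feats.max(dim=1).values  # symmetric aggregation
        return self.head(global_feat)

model = TinyPointNet().eval()
cloud = torch.randn(2, 1024, 3)
shuffled = cloud[:, torch.randperm(1024)]
print(model(cloud).shape)                             # torch.Size([2, 4])
print(torch.allclose(model(cloud), model(shuffled)))  # True: order-invariant
```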
by Jan Gruszczyński | Artificial Intelligence Engineer | PUT, PUM
Biography:
I'm Jan Gruszczyński and I work as an Artificial Intelligence engineer at WTT, where I specialize in unsupervised learning. After work, I try to actively connect my graduate studies in AI with my passion for biology and medicine by pursuing scientific projects connecting the natural sciences with machine learning, in collaboration with Poznan University of Medical Sciences.
Abstract:
For the past year, my colleagues and I have been working with a team of pathologists, trying to develop a practical nuclei instance segmentation tool for histopathological tissue slices.
Join our presentation to learn more details:
- Explore with us all of the unexpected technical, but also formal, difficulties that we encountered.
- Learn about the usefulness of deep learning in the field of pathology, but also about the feasibility of bringing such solutions to production.
- Discover how we collaborated with a team of physicians to create a one-of-its-kind dataset in a multi-stage pipeline.
- Study our attempts to beat current state-of-the-art solutions in the field.
by Dawid Plaskowski | Student | Poznań University of Technology
Biography:
Dawid Plaskowski is enrolled as a master’s student of Artificial Intelligence at Poznań University of Technology. Before pursuing his degree, he worked as a data scientist specializing in drone swarms and predictive maintenance. Currently, he's an R&D specialist in Natural Language Processing at Łukasiewicz - Poznań Institute of Technology where his expertise stems from his experience as the leader of the “Transformer” seminar group at GHOST. His research interests are: large language models, reinforcement learning and interpretability.
Abstract:
-
by Adam Handke | Student | AI Intern | Poznań University of Technology | InterLAN
Biography:
I'm a final year MSc Eng. student of Computing with an AI specialization at Poznań University of Technology and a part-time AI Intern at InterLAN. My research interests are broad and include machine learning applications, preference learning, metaheuristics, natural language processing and computer vision. The AI Tech project is a vital part of my studies and allows me to engage in many extracurricular activities, e.g. participation in ECML 2022. During my work at InterLAN, I focus on developing machine learning models which enhance our software solutions for logistics companies. In my free time I love to play the clarinet in a concert band.
Abstract:
In order to determine an efficient route for a shipping truck, InterLAN's optimization software needs detailed information about the delivery. This includes an estimation of how much time the vehicle will spend at the loading/unloading facilities, the so-called "dwell time". The task was approached as a classic regression problem with tabular input data acquired from InterLAN's SPEED application. As it was raw, real-life freight data, it required thorough cleansing and feature engineering before use. The main challenge was obtaining ground-truth values of dwell time, as they were not recorded directly. It was solved by comparing the GPS log of the truck with the known coordinates of the shipment's origin and destination. Finally, a wide selection of machine learning algorithms was tested, and promising results were achieved by the best alternative: a stacking ensemble of models based on random forest and gradient boosting approaches.
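A minimal sketch of the final model family named above (generic scikit-learn stacking, with synthetic data standing in for the SPEED features):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0,
                       random_state=0)  # stand-in for dwell-time features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),  # meta-model combines the base predictions
)
stack.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {stack.score(X_te, y_te):.3f}")
```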
by Michał Szafarczyk | Student | AGH University of Science and Technology
Biography:
Michał is a 3rd year Computer Science student at AGH UST. Member and lecturer of BIT student organization at AGH. Recent winner in AI category at BITEhack 2023 hackathon. Enthusiast of ML and its applications in medicine. Personally, fan of sailing, skiing and classical music.
Abstract:
Many real-world datasets in machine learning have mixed-type variables: numerical, categorical nominal and ordinal, and even asymmetrical binary features. Most algorithms only support numerical variables, requiring one-hot encoding and introducing problems such as high sparsity and the curse of dimensionality. In this talk, we present an alternative for both supervised and unsupervised learning problems: the Gower metric. It is highly flexible, supporting all common variable types, and we additionally present original ideas for improving its performance. We present practical applications to the classification and clustering of mixed-variable-type datasets. We also describe our journey from a proof-of-concept Python implementation to a highly efficient version, further accelerated by Numba, providing tips on improving performance.
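For one pair of records, the basic Gower distance averages a per-feature dissimilarity: a range-normalised absolute difference for numeric features and simple matching for categorical ones. A simplified sketch (equal weights, no missing values, the ordinal case omitted):

```python
import numpy as np

def gower_distance(a, b, kinds, ranges):
    """Gower distance between two records with mixed variable types.

    kinds:  'num' or 'cat' per feature.
    ranges: value range per numeric feature (ignored for categorical).
    """
    parts = []
    for x, y, kind, r in zip(a, b, kinds, ranges):
        if kind == "num":
            parts.append(abs(x - y) / r)          # normalised difference
        else:
            parts.append(0.0 if x == y else 1.0)  # simple matching
    return float(np.mean(parts))

# age (range 62), contract type, monthly fee (range 200)
rec1 = (34, "prepaid", 49.0)
rec2 = (51, "postpaid", 59.0)
print(gower_distance(rec1, rec2, kinds=("num", "cat", "num"),
                     ranges=(62, None, 200)))
```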
by Mateusz Politycki & Dawid Plaskowski | Students | Poznań University of Technology
Biography:
Mateusz is a master's student at the Poznań University of Technology and currently works as an AI Engineer Intern at Wunderman Thompson Technology. He is mainly interested in Natural Language Processing, but he also deals with Computer Vision tasks on a daily basis and has the opportunity to explore a broad range of fields, from Stable Diffusion to Graph Neural Networks.
Dawid Plaskowski is enrolled as a master's student of Artificial Intelligence at Poznań University of Technology. Before pursuing his degree, he worked as a data scientist specializing in drone swarms and predictive maintenance. Currently, he's an R&D specialist in Natural Language Processing at Łukasiewicz - Poznań Institute of Technology, where his expertise stems from his experience as the leader of the "Transformer" seminar group at GHOST. His research interests are: large language models, reinforcement learning and interpretability.
Abstract:
While the popularity of question answering chatbots is on the rise, many problems in these types of systems still remain unsolved. In our work, we tackled the knowledge hallucination problem, i.e. the model confidently providing non-credible answers to questions, as well as the problem of outdated knowledge, stemming from the time cutoff of the model's training dataset. We approach these using a selection of prompting techniques, including n-shot learning, information retrieval, and multi-stage answering strategies.
We present our implementations of the methods in the form of a Python library and an Open System web application. Furthermore, we also show the results of experiments with various prompting techniques performed on the question answering benchmarks OpenBookQA and HotpotQA.
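As an illustration of the n-shot prompting idea mentioned above (a generic prompt-builder sketch, not the API of our library):

```python
def build_nshot_prompt(examples, question, context=None):
    """Compose an n-shot prompt: demonstrations, optional retrieved
    context, then the target question."""
    lines = []
    for q, a in examples:  # the n demonstrations
        lines.append(f"Q: {q}\nA: {a}")
    if context:            # retrieved evidence, if any
        lines.append(f"Context: {context}")
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

demos = [("Which gas do plants absorb?", "Carbon dioxide"),
         ("What is H2O commonly called?", "Water")]
print(build_nshot_prompt(demos, "What charges an electroscope?",
                         context="Rubbing transfers electrons."))
```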
The authors of the work are: Dawid Plaskowski, Mateusz Politycki, Marcin Korcz and Alex Terentowicz