Speakers

Demystifying AI: From 'Hello World' to Building Your First Intelligent App
This session is designed for absolute beginners. We will strip away the jargon to understand what AI actually is, how it 'thinks,' and look at accessible tools like Google Gemini and Vertex AI to start building. I’ll share my journey of building projects like the BinWise AI system to show how anyone can move from an idea to a working prototype.

Agentic AI in 2026: Hype, Reality, and the Road Ahead
Agentic AI is either the biggest shift since the internet—or the most over-marketed concept of the decade. The truth sits somewhere in between, and that's where the real opportunity lives. In this keynote, I will share what is actually working inside enterprises today, the uncomfortable gaps between demo and deployment, and the strategic bets leaders should be making for the road ahead.

Fine-Tuning Your Own LLMs
The landscape of LLM customization. We will dive into the theory of various training paradigms, computational considerations, and the foundational strategies that make model fine-tuning both possible and efficient, comparing it with alternative approaches.
Implementing the fine-tuning pipeline. Moving from theory to practice, this session breaks down the standard workflow of training a custom LLM. We will examine the key stages of the process: from structuring input data to running the training cycle and assessing model performance.
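The workflow can be compressed into a toy example. The linear model and numbers below are illustrative stand-ins, not an actual LLM fine-tune, but the three stages are the same ones the session covers:

```python
# Toy version of the fine-tuning workflow: structure data, train, evaluate.
# A one-parameter linear model stands in for the LLM; the stages are the point.

# 1. Structure input data: (input, target) pairs, split into train/validation.
data = [(x, 2.0 * x) for x in range(10)]
train, val = data[:8], data[8:]

# 2. Run the training cycle: stochastic gradient descent on squared error.
w, lr = 0.0, 0.001
for epoch in range(200):
    for x, y in train:
        grad = 2 * (w * x - y) * x   # d(loss)/dw for loss = (w*x - y)^2
        w -= lr * grad

# 3. Assess model performance on held-out data.
val_loss = sum((w * x - y) ** 2 for x, y in val) / len(val)
```

Real pipelines differ in scale, not in shape: the data-structuring, training-cycle, and evaluation stages map one-to-one onto LLM fine-tuning.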

AI Code Generation, AI Agents, Copilot, Cursor, Claude Code, etc.
An overview of modern AI-powered development approaches, including code generation with LLMs, AI agents, and the emerging concept of Vibe Coding. The session will cover popular tools such as Copilot, Cursor, Claude Code, and others, highlighting their capabilities, limitations, and real-world use cases.

Deutsch–Jozsa algorithm
Quantum computing emerged from the realization that classical computers struggle to efficiently simulate quantum systems, as noted by Richard Feynman in the 1980s. This led to the development of quantum algorithms that exploit superposition and interference to achieve computational advantages over classical methods. The Deutsch–Jozsa algorithm demonstrates this advantage by determining whether a Boolean function is constant or balanced using a single query, whereas any deterministic classical algorithm requires exponentially many evaluations in the worst case.
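As a taste of the mathematics behind this result: the amplitude of the all-zeros measurement outcome after the circuit is (1/2^n) * sum over x of (-1)^f(x), which has magnitude 1 when f is constant and vanishes when f is balanced. A plain-Python sketch of that interference sum:

```python
def zero_amplitude(f, n):
    # Amplitude of measuring |0...0> after the Deutsch-Jozsa circuit:
    # (1 / 2^n) * sum over all n-bit inputs x of (-1)^f(x).
    return sum((-1) ** f(x) for x in range(2 ** n)) / 2 ** n

n = 3
constant = lambda x: 1                      # f(x) = 1 on every input
balanced = lambda x: bin(x).count("1") % 2  # parity: 1 on exactly half the inputs

print(zero_amplitude(constant, n))  # magnitude 1: the all-zeros result is certain
print(zero_amplitude(balanced, n))  # 0: the all-zeros result never occurs
```

A single measurement of the input register therefore settles constant vs. balanced, which is exactly the one-query advantage described above.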

Securing AI Systems: How to Break and Protect LLMs and AI Agents
In 2026, AI security is no longer just about fighting "hallucinations" but about countering real exploits. In this session, we will move from theory to practice and examine how LLM architectural features translate into attack vectors. We will delve into the anatomy of Indirect Prompt Injection and discover why even the most powerful models from tech giants like Google can become a tool in the hands of an attacker.

Machine Learning for Pricing Crypto Prediction Markets
We will demonstrate how machine learning models can be used to price a BTC up/down prediction market using simple price-based features.
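As a rough sketch of the idea (the features, weights, and prices below are invented for illustration, not a real trading model): in a binary market, the fair price of the "up" share is the model's estimated probability of the up outcome.

```python
import math

def predict_up(closes, weights, bias):
    # Simple price-based features: last-period return and short momentum.
    ret = (closes[-1] - closes[-2]) / closes[-2]
    momentum = (closes[-1] - closes[-4]) / closes[-4]
    score = bias + weights[0] * ret + weights[1] * momentum
    return 1 / (1 + math.exp(-score))   # logistic link -> probability in (0, 1)

closes = [100.0, 101.0, 100.5, 102.0]                 # toy BTC closing prices
p_up = predict_up(closes, weights=[8.0, 4.0], bias=0.0)  # illustrative weights
price = p_up   # fair price of the "up" share in the prediction market
```

A trained model would learn the weights from historical outcomes; the point here is only the feature-to-probability-to-price mapping.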

Why machine learning doesn’t work and where the real problem is
It’s easy to build a machine learning (ML) model that looks good in a presentation. It’s harder to make it support good decision-making. Harder still to turn it into meaningful business value. Based on my experience, I’ll talk about what really determines success in ML-driven projects.

Querying the Enterprise Brain: Building RAG Agents Over Structured & Unstructured Data
This session will cover the architectural principles of Retrieval-Augmented Generation (RAG) and the design of agentic AI systems that operate across both structured (SQL, data warehouses) and unstructured (documents, knowledge bases) enterprise data. I will discuss practical implementation patterns using LangChain and AWS Bedrock, including a live demonstration grounded in real-world enterprise use cases.
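The core routing idea can be sketched in a few lines. The stores, keywords, and matching rule below are toy stand-ins, not LangChain or Bedrock APIs:

```python
# Toy sketch of routing a question to structured vs. unstructured retrieval.
structured = {"revenue 2024": "$12M", "headcount": "85"}   # "SQL" facts
documents = [                                              # knowledge base
    "The onboarding guide covers laptop setup.",
    "Travel policy requires booking flights 14 days ahead.",
]

def retrieve(question):
    # Route exact-fact questions to the structured store,
    # everything else to keyword overlap over documents.
    key = question.lower().rstrip("?")
    if key in structured:
        return structured[key]
    words = set(key.split())
    return max(documents, key=lambda d: len(words & set(d.lower().split())))

print(retrieve("headcount"))                  # answered from the structured store
print(retrieve("what is the travel policy"))  # answered from the document store
```

In a real RAG agent the structured branch becomes generated SQL and the unstructured branch becomes vector search, but the routing decision looks much the same.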

From Foundations to Frontiers: Deep Learning and Explainable AI for Anomaly Detection
This series of sessions provides a structured journey from fundamental concepts of artificial intelligence to advanced applications in anomaly detection. It introduces deep learning techniques for identifying abnormal patterns in real-world data, followed by a hands-on session on building detection models. The advanced component focuses on explainable AI, demonstrating the use of Integrated Gradients to interpret model decisions and enhance trust in critical applications such as healthcare and industrial systems. Participants will gain both theoretical insights and practical experience in developing reliable and interpretable anomaly detection systems.

From Microservices to Microcontrollers: Message-Driven Design for Sensor-to-Cloud Systems
Modern sensor-to-cloud systems are often described in terms of protocols, connectivity, and cloud analytics. In practice, however, many of the hardest engineering problems appear in the middle: how data is acquired, transformed, routed, and delivered under tight timing and resource constraints. This talk argues that some architectural principles known from microservices — clear responsibilities, message-driven flow, and modular decomposition — can still be useful on constrained microcontrollers, even though the deployment model is very different.
The session presents this idea through two experimental open-source projects: Hako, a Ruby runtime for Zephyr RTOS built on mruby/c, and Takagi, a CoAP-oriented communication layer for IoT systems. Rather than presenting them as a finished framework, the talk uses them as a case study in building a higher-level execution model above RTOS primitives. I will discuss where low-level RTOS mechanisms end, where application-level runtime design begins, and how tasks or callbacks can be interpreted as agent-like units in a deterministic message-driven flow. This perspective may be especially relevant for edge AI systems, where execution structure and timing can matter as much as the model itself.
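The message-driven decomposition can be illustrated in miniature. Python stands in here for the Ruby/RTOS runtime, and the units and threshold are invented for illustration:

```python
# Minimal sketch of a deterministic message-driven flow: each "unit" has one
# responsibility and communicates only through queues.
from collections import deque

raw, filtered, outbox = deque(), deque(), deque()

def acquire(sample):                 # unit 1: acquisition
    raw.append(sample)

def transform():                     # unit 2: transformation
    while raw:
        v = raw.popleft()
        if v >= 0:                   # drop implausible negative readings
            filtered.append(round(v, 1))

def deliver():                       # unit 3: delivery to the "cloud"
    while filtered:
        outbox.append({"sensor": "temp1", "value": filtered.popleft()})

for s in [21.37, -5.0, 21.41]:       # one deterministic scheduling round
    acquire(s)
transform()
deliver()
```

Because each unit only reads and writes queues, the scheduling order is explicit and deterministic, which is the property the talk argues matters on constrained microcontrollers.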

From Scaling Laws to Agentic AI: Building Intelligent Systems with LLMs
This keynote provides a high-level overview of the evolution of large language models, from scaling laws to agentic AI, and explores how these advances are shaping the development of intelligent systems.
Enhancing LLM Performance: Prompting Techniques and Retrieval-Augmented Generation (RAG)
This session explores techniques for enhancing LLM performance, focusing on effective prompting strategies and the use of Retrieval-Augmented Generation (RAG) to improve accuracy and reliability.
Agentic AI Systems: From Language Models to Autonomous Decision-Making
This session introduces agentic AI systems, exploring how language models can be extended with reasoning, tools, and memory to enable autonomous decision-making and multi-step task execution.

From Chaos to Clarity: Stabilizing Legacy Systems to Enable AI-Ready Data Pipelines
The case study focuses on a document-processing pipeline that ingests scanned PDFs, performs OCR using Tesseract, extracts structured data through a combination of regex and large language models, integrates human corrections, validates outputs, and publishes results to end users. Despite its business importance, the system lacked architectural clarity, observability, automated testing, and clear data lineage, while being split across multiple repositories with hidden dependencies.
The transformation begins with architecture recovery and end-to-end data flow mapping, followed by the introduction of step-level metrics stored in PostgreSQL and visualized in Grafana. The pipeline is then unified into a single repository and supported by a golden dataset and regression testing framework to ensure output consistency during refactoring. These steps establish a solid foundation for Dagster-based orchestration and future scalability.
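The golden-dataset idea can be sketched as follows; `pipeline` below is a toy stand-in for the real OCR and extraction steps:

```python
# Re-run the pipeline on frozen inputs and compare against stored expected
# outputs before accepting a refactor.

def pipeline(text):
    # Toy extraction step: pull the invoice number from a line of OCR text.
    for token in text.split():
        if token.startswith("INV-"):
            return token
    return None

golden = [  # frozen inputs paired with their expected outputs
    ("Invoice INV-1042 issued 2024-01-03", "INV-1042"),
    ("Handwritten note, no invoice number", None),
]

failures = [(inp, exp, pipeline(inp))
            for inp, exp in golden if pipeline(inp) != exp]
```

Any refactor that changes behaviour shows up as a non-empty `failures` list, which is the safety net that makes large-scale restructuring tractable.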

Applications of AI in Learning
As generative AI rapidly reshapes the landscape of education, this session explores how AI can be applied to move learning beyond passive content consumption toward curiosity-driven, dialogic, and learner-centred experiences. Drawing on practical experimentation and system design, the presenter delivers complementary perspectives on using AI both as an engagement catalyst and as a personalised learning companion.
The first part shares an innovative pedagogical approach that gamifies learning through structured competition between ChatGPT and the teacher. By positioning AI as a contested “knowledge player” rather than an authoritative answer engine, students are motivated to question, verify, and outperform both human and machine. This design leverages students’ natural curiosity while strengthening disciplinary understanding, critical thinking, and essential soft skills. The talk discusses the underlying design rationale, classroom implementation, assessment considerations, and empirical feedback from students and teachers, demonstrating how carefully framed AI use can enhance motivation without undermining learning integrity.

Big Measurement Data: Artificial Intelligence for Time Series in Real-World Applications
Modern distributed Internet of Things systems offer unprecedented measurement and data collection capabilities, at increased temporal and spatial resolutions, which serve the development of high-fidelity and (almost) infinitely complex digital twin representations of the physical world. Real-time data processing of these heterogeneous, multi-scale data streams requires coordination with control loop performance requirements, under strict robustness and reliability constraints. The talk discusses the current approaches for data pre-processing, feature extraction, learning, and optimization of real-world datasets in industry and energy scenarios. It provides a comparative overview of classical, machine learning, and deep learning time series methods, and most recently LLM implementations for time series data, such as TimeGPT, which lie at the intersection of statistics, econometrics, computer science, and engineering. For implementation purposes, well-established programming and scientific computing libraries and frameworks that can be used to extract information and lead to accurate characterization of the underlying dynamic processes, with replicable and computationally efficient results, are introduced. Relevant case studies will focus on microgrid scenarios, where effective labelling and classification of microscale features can lead to improved energy management. Experiences gained in recent collaborative research projects are also highlighted (EMERGE, INFUSE, ECOM4FUTURE, NOVETROL).

Building Practical AI Agents for Real-World Applications
This session will explore how to design and implement AI agents for real-world use cases, focusing on practical architectures, tooling, and deployment strategies. It will include examples of automation workflows, integration with existing systems, and lessons learned from production environments.

Computer vision with ML
Computer Vision with Machine Learning is an introductory-level course that provides students with the fundamental principles of how machines process and interpret visual information from images and video. The lecture introduces core concepts of digital image representation, basic image preprocessing, feature extraction, and essential machine learning methods used for visual analysis. Students will explore foundational tasks such as image classification, object detection, and simple object tracking, while gaining an initial understanding of neural networks and convolutional neural networks for vision applications. The course emphasizes the basic theoretical concepts and practical intuition required for further study in computer vision, artificial intelligence, and data-driven image analysis.

AI Agents in Security: Guardrails, Boundaries, and Real Lessons Learned
As generative and agentic AI rapidly enters security workflows, practitioners face a difficult choice: leverage AI to enhance efficiency, or risk exposing sensitive data to systems that weren’t designed with security in mind. For me, as a security engineer and pentester, this challenge is especially important: pentesting often involves confidential data, proprietary code, and high-impact vulnerabilities that cannot leave controlled boundaries.

Cloud options for Big Data processing
Students will learn about data storage, batch and stream processing, and analytics in the cloud. The session covers how major cloud providers enable scalable data processing.
Building the Big Data conveyors
Students will explore how to design and build Big Data pipelines that move data from sources to storage and analytics systems. The session focuses on data ingestion and transformation using distributed frameworks, with practical examples of end-to-end Big Data workflows.

Transformers, BERT, the GPT model family (GPT-1, 2, 3, 3.5, 4), LLaMA, DeepSeek, ChatGPT usage, ChatGPT API, tokenisation, creation and usage of GPTs
We will cover the evolution of large language models — from the Transformer architecture and BERT to the full GPT family (GPT-1 through GPT-4), as well as open-source alternatives like LLaMA and DeepSeek. Attendees will get a clear understanding of how tokenisation works and why it matters for working with these models effectively.
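To preview the tokenisation topic: byte-pair encoding builds a subword vocabulary by repeatedly merging the most frequent adjacent pair, which is why common strings like "low" end up as single tokens. A simplified sketch (real tokenisers learn merges from large corpora and operate on bytes):

```python
from collections import Counter

def bpe_merge_step(tokens):
    # Find the most frequent adjacent pair and merge every occurrence of it.
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(a + b)   # the pair becomes one new token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest")  # start from individual characters
for _ in range(3):
    tokens = bpe_merge_step(tokens)
```

After a few merges the character sequence collapses into subword tokens such as "low", mirroring how production vocabularies compress frequent strings.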

Building for the Real World: The Full-Cycle Engineering of AI Agents
Moving an AI project from a local prompt to a production-ready system requires more than just code; it requires a rigorous engineering lifecycle. In this talk, I will pull back the curtain on the "Build-Test-Deploy" pipeline for AI Operating Systems. We will explore how to approach AI in any industry using a structured four-pillar process:
- Strategic Discovery: Understanding the "Fit" and defining the boundaries of an AI system.
- Architectural Design: Structuring memory, tool-calling, and external integrations.
- The Deployment Gap: Solving the challenges of latency, reliability, and edge-case handling in live environments.
- Case Studies: Brief insights from diverse automation builds from data-driven business intelligence to real-time communication systems.

Applied LLMs, Agentic AI & LLM Security: Building and Defending Intelligent Systems for Industry
LLMs have crossed the threshold from research curiosity to industrial backbone, and the organisations that understand how to apply, orchestrate, and secure them are the ones defining the next decade of technology. In this session, we begin at the foundation: how LLMs actually work in production, how Prompt Engineering and Output Control turn raw model capability into reliable, structured, industry-grade behaviour, and how Retrieval-Augmented Systems (RAG) bring real-world knowledge into the loop. From there, we step into the Agentic frontier, where LLMs stop being tools you query and start being systems that plan, reason, call APIs, use external tools, and autonomously execute complex multi-step workflows with minimal human intervention. Finally, we confront the question that the industry is only just waking up to: what happens when these powerful systems are attacked? We will dissect the real threat landscape of deployed LLMs: Prompt Injection, Jailbreaks, Data Poisoning, Model Extraction, and Adversarial Manipulation, and walk through the architectural and operational defences that separate robust AI systems from vulnerable ones.

Google Cloud Platform
This lecture introduces Google Cloud Platform (GCP) and its core services. It explores key cloud services such as Compute Engine, Cloud Storage, and Infrastructure Manager, and explains how GCP supports scalable and cost-efficient processing. The session emphasizes core principles of cloud computing, providing an overview of how GCP enables the development of solutions.

Knowledge base and storages, Vector DB, Ontology
Beyond the database: How do we build 'memory' for AI? Join us for a deep dive into Knowledge Bases, Vector DBs, and Ontologies. We will move past simple data storage to explore how semantic indexing and logical relationships are used to eliminate LLM hallucinations and power the next generation of autonomous agents.
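The semantic-indexing idea can be sketched with cosine similarity over embeddings; the documents and vectors below are made up for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny "vector DB": each document is stored with an illustrative embedding.
index = {
    "reset your password": [0.9, 0.1, 0.0],
    "team holiday calendar": [0.0, 0.2, 0.9],
    "account recovery steps": [0.8, 0.3, 0.1],
}

def search(query_vec, k=2):
    # Rank documents by similarity to the query embedding, keep the top k.
    ranked = sorted(index, key=lambda d: cosine(index[d], query_vec), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # nearest neighbours of a "login trouble" query
```

Real vector databases add approximate-nearest-neighbour indexing and metadata filters, but the ranking primitive is this similarity search.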

Cloud computing basics, SaaS, PaaS and IaaS
This lecture introduces the fundamentals of cloud computing and its core service models. Participants will explore the differences between SaaS, PaaS, and IaaS through practical examples and use cases. By the end, they will understand how to choose the right cloud model for different business and development needs.

Building an Agent-to-Agent Negotiation Network: When AI Agents Discuss Real-World Challenges
I'll share how we built a multi-agent system where autonomous AI agents negotiate real-world deals: salary packages, corporate acquisitions, and SaaS contracts, including a regulator agent enforcing fairness in real time. I'll introduce the essential building blocks of Agent-to-Agent interaction, including LangGraph for state machine orchestration, structured prompting for distinct agent personas, and config-driven scenario design. We'll run live demos where flipping a single hidden toggle visibly changes the AI's reasoning and deal outcome. Finally, I'll show how everyone can build their own AI agent and connect to our open-source community to experiment, contribute, and push the boundaries of what multi-agent systems can do.

Building Production AI in the Global South: Lessons from Shipping Real AI Products with Limited Resources
This talk covers the practical realities of building and deploying AI systems in a resource-constrained environment: no dedicated GPU, unreliable infrastructure, API quota limits, and tight budgets. Drawing from real experience shipping ClosetAI (an AI-powered wardrobe assistant) and PinPoint (a real-time delivery tracking tool solving Nigeria's last-mile addressing problem), I'll share architectural decisions, graceful degradation patterns, working with NVIDIA NIM and LLM inference in production, and what it actually takes to go from idea to live product as a student in Africa. This perspective, building AI from the Global South with real constraints, offers something different from typical AI engineering talks.

From Prototype to Production: Building Real-World AI Systems
We’ve all seen cool AI demos—but turning them into real, reliable products is a different challenge. In this talk, I’ll walk through what it actually takes to build and ship AI features in production. We’ll look at how modern AI apps are structured (LLMs, RAG, APIs), and talk about common issues like latency, cost, and unpredictable outputs. I’ll also share practical patterns and lessons learned from real-world systems, so the audience leaves with a clearer idea of how to build AI that actually works at scale.

Building Real-World AI Applications with LLMs and RAG Systems
This session will focus on how to design and build production-ready AI systems using Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). I will walk through real-world use cases, including AI chatbots and voice assistants, covering system architecture, tools, and practical implementation strategies. A short live demo or workflow explanation will also be included to give participants hands-on insight into real industry applications.

Grover’s Algorithm: Quadratic Speedup for Unstructured Search
Grover’s algorithm is a fundamental quantum algorithm that provides a quadratic speedup for searching in unstructured databases. While classical algorithms require O(N) queries to locate a target item among N possibilities, Grover’s algorithm reduces this complexity to O(sqrt(N)) by exploiting quantum superposition and interference. The algorithm operates by iteratively applying two key operators: an oracle that marks the desired state by inverting its phase, and a diffusion operator that amplifies its probability amplitude. This process can be interpreted geometrically as a rotation in a two-dimensional subspace, gradually increasing the overlap with the target state. Grover’s algorithm is provably optimal for unstructured search problems and has important applications in cryptanalysis, optimization, and quantum information processing.
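The two operators can be simulated directly on a list of amplitudes. This plain-Python sketch tracks the real amplitudes of an N-item search and shows the success probability approaching 1 after about (pi/4)*sqrt(N) iterations:

```python
import math

def grover_success_prob(n_items, target, iterations):
    # Start in the uniform superposition over all items.
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[target] = -amp[target]          # oracle: phase-flip the marked item
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]   # diffusion: inversion about the mean
    return amp[target] ** 2                 # probability of measuring the target

N = 64
k = round(math.pi / 4 * math.sqrt(N))       # ~O(sqrt(N)) optimal iteration count
p = grover_success_prob(N, target=5, iterations=k)
```

Running more than k iterations over-rotates and the probability starts falling again, which is why stopping near the optimal count matters.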

Machine learning fundamentals
Machine Learning Fundamentals is an introductory lecture that presents the core ideas and main paradigms of machine learning. It covers supervised, unsupervised, and reinforcement learning, explains how these approaches differ, and discusses typical tasks they are used for. The lecture also briefly introduces exploratory data analysis and basic data preparation as important steps that precede model building. The goal is to give learners a clear conceptual overview of how machine learning works and where different methods can be applied.

Building Effective Agentic AI Systems: Frameworks, Workflows and Lessons Learned
Enterprises are eager to adopt agentic AI, but many run into the same challenges: unclear frameworks, messy integrations, misconceptions about system capabilities, and confusion around design choices. This session will share real lessons learned to help you avoid common pitfalls and build agentic systems that actually perform at scale. We’ll break down the essential decisions: single vs. multi-agent design, choosing the right framework, whether to use MCP, and how to integrate agents into existing workflows. You’ll walk away with a clear understanding of the challenges, the tradeoffs, and the practical steps required to build reliable agentic AI in the enterprise.

Basic libraries: Numpy, Pandas, Scikit-learn, Seaborn, matplotlib, sktime, skforecast
Unlock the power of modern data science by mastering Python's most essential analytical libraries. This lecture progresses from foundational data wrangling with NumPy and Pandas straight into compelling visual storytelling using Matplotlib and Seaborn. Building on those insights, the session explores core machine learning concepts via Scikit-learn before introducing sktime and skforecast to tackle the unique challenges of time series analysis.

Monitoring and Observability in Azure: Building Reliable Cloud Systems
This session explores how to design and implement effective monitoring and observability strategies in modern cloud environments using Microsoft Azure. The presentation will cover key Azure services such as Azure Monitor, Application Insights, and Log Analytics, demonstrating how they work together to enable end-to-end visibility across applications and infrastructure. Real-world scenarios will illustrate how to detect anomalies, troubleshoot performance issues, and build intelligent alerting systems. The session will also touch on integrating OpenTelemetry to enable vendor-neutral instrumentation and standardized telemetry collection across distributed systems. Participants will gain a practical understanding of how to move beyond basic metrics toward a more holistic observability approach that combines logs, metrics, and traces to provide actionable insights into system behavior.

Data warehouse, ETL, Data Workflows
This lecture provides a comprehensive technical deep-dive into the lifecycle of data—from its origin in transactional systems to its final state in analytical environments. We will examine the architectural principles of modern data warehousing and the engineering required to move data reliably at scale.

Data Preparation for Machine Learning
In the world of Machine Learning, your model is only as good as the data you feed it. This session dives into why Data Preparation is the most critical (and time-consuming) phase of the ML lifecycle and how Databricks streamlines this chaos into a scalable, production-ready pipeline.

Generative AI fundamentals
In this speech, we will explore the fundamentals of generative models, delving into their core concepts and how they create new data, such as images, text, and audio. We’ll highlight their connection to cutting-edge technologies like AI and deep learning, showcasing practical applications across various fields. Expect insights into the mechanics, benefits, and transformative potential of generative models in shaping innovation.

The AI-Assisted SDLC: Patterns, Pitfalls, and Reality Checks
This session provides a pragmatic overview of AI-assisted software engineering, focusing strictly on how these tools integrate into each phase of the modern SDLC. Moving past the hype of greenfield demos, we will tackle the realities of applying LLMs to complex "grey-field" projects. We will explore the shift from basic human-in-the-loop workflows to autonomous agentic approaches, and how these trends return us to the fundamental practices and quality standards for software engineering that enable effective LLM usage.

Database basics, relational, non-relational, distributed databases
We will explore database fundamentals across relational, non-relational, and distributed architectures. Our primary focus will be a hands-on comparison between MongoDB and Firebase. You will learn to distinguish between a robust database built for complex querying and a full-featured real-time platform that integrates authentication, analytics, and data synchronization.

Serving LLM orchestration
ChatGPT is impressive. But a single prompt-response cycle can't book your flight, debug your codebase, or monitor your infrastructure. This lecture breaks down how LLMs are orchestrated into intelligent pipelines — and the engineering challenges of making them fast, reliable, and production-ready.

Amazon Web Services
This session provides a practical introduction to high-performance computing on Amazon Web Services. We will explore the key CPU-based compute options available in Amazon EC2, including powerful bare-metal (“metal”) instances, and explain how to organize them into cluster placement groups for low-latency, high-bandwidth performance. The session concludes with a brief overview of machine learning-optimized instances, including GPU-based options, and how they can accelerate modern data-driven applications.

NoSQL: Key-Value, Column-based, Document-based, Graph databases
The "Why" of NoSQL, the CAP theorem & PACELC, key-value stores (e.g., Redis, Riak), column-oriented databases (e.g., Cassandra, HBase), document-based databases, graph databases (e.g., Neo4j), decision matrix

A New Vision for Software Architecture: A Unified Architecture
This lecture explains how generative AI changes the way software architecture should be described and understood. It introduces a unified architecture metamodel as a common framework for connecting requirements, components, data, interactions, and development artifacts in AI-generated systems. The main idea is to make such systems more structured, understandable, and manageable. The presentation highlights key challenges, explains the role of the metamodel, and shows why it can improve consistency, transparency, and practical control in software engineering.
Generative AI for real-life applications
This lecture explores generative AI as a practical technology for real-life applications across customer service, software development, healthcare, marketing, logistics, and knowledge work. It highlights how organizations use generative AI for automation, personalization, content generation, decision support, and workflow acceleration, while also addressing challenges of reliability, governance, and integration.

Generative AI for creative applications
Using diffusion models, LLMs, and multimodal systems for creative work, and lessons from collaborating with artists and game studios.

From Quantum Circuits to Quantum Agents: Towards Scalable and Self-Programming Quantum AI
This talk presents a unified vision for advancing quantum machine learning from static variational models to scalable, adaptive, and ultimately self-programming quantum agents. Beginning with Variational Quantum Circuits (VQCs) as the fundamental computational primitives, we introduce several recent frameworks that extend the expressive and structural capacity of quantum models: Quantum Architecture Search (QAS) including evolutionary, RL and differentiable methods for learnable quantum circuit design, the Quantum Fast Weight Programmer (QFWP) for meta-level parameter generation, and the Quantum Train (QT) paradigm that couples quantum and classical networks for hybrid learning. These components form the basis for next-generation architectures such as Quantum LSTM (QLSTM) and Quantum Reinforcement Learning (QRL), enabling temporal modeling, decision-making, and adaptive behavior. I will also discuss their applications in climate forecasting, biomedical signal analysis, and communication networks, illustrating how quantum circuits can evolve toward more autonomous, agent-like intelligence. The talk concludes with future directions on distributed quantum learning and self-referential quantum AI systems.

Architecting Multi-Session Memory: Production-Grade State Management for LLM Agents
This technical session explores the architecture and implementation of persistent memory systems for AI agents. Participants will learn how to build hierarchical state management layers that allow agents to maintain coherent, long-term context across multiple sessions without suffering from "digital amnesia." Focusing on production-grade engineering, the talk covers context compression algorithms, scalable database schemas (using relational, caching, and vector layers), and robust privacy frameworks for GDPR compliance. Attendees will leave with a practical blueprint for deploying memory-enabled agents that can navigate complex, multi-session user interactions at enterprise scale.
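One of the simplest compression layers can be sketched as follows; the whitespace token count and the summary placeholder are deliberate simplifications of what a production system would do:

```python
# Keep recent turns verbatim within a token budget and collapse older turns
# into a summary slot, so the agent's context never grows without bound.

def compress(history, budget):
    tokens = lambda msg: len(msg.split())    # crude stand-in for a tokenizer
    kept, used = [], 0
    for msg in reversed(history):            # walk newest-first
        if used + tokens(msg) > budget:
            break
        kept.append(msg)
        used += tokens(msg)
    older = len(history) - len(kept)
    kept.reverse()
    if older:
        kept.insert(0, f"[summary of {older} earlier messages]")
    return kept

history = ["user: hi", "bot: hello there", "user: my order 123 is late",
           "bot: sorry, checking order 123 now"]
context = compress(history, budget=10)
```

In the architecture described above, the summary slot would be filled by an actual summarisation pass and backed by the relational, caching, and vector layers.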

Quantum Risk to AI Systems: What Every Engineer Should Understand About Harvest-Now, Decrypt-Later
Modern AI systems depend on cryptography that quantum computers will eventually break, and adversaries are not waiting. Encrypted traffic, model weights, training data, and credentials are being collected today on the assumption that future quantum capability will decrypt them retroactively. This is the harvest-now, decrypt-later threat, and it is already shaping how serious organizations design AI infrastructure.
This session walks engineers through the technical substance: why RSA and ECC fail under Shor's algorithm, what the NIST post-quantum cryptography standards (ML-KEM, ML-DSA, SLH-DSA) actually do, where the migration is hardest in AI pipelines (long-lived model artifacts, federated learning channels, inference APIs), and how cryptographic agility is becoming a baseline engineering requirement rather than a future concern.

What Is a Large Language Model and Why Does It Matter?
Every day, millions of people interact with AI tools like ChatGPT, Gemini, and Claude, but very few understand what is actually happening under the hood. This session breaks down Large Language Models (LLMs) in plain, jargon-free language. We will explore how these models are trained, how they generate text, and why they sometimes get things wrong. No prior AI or coding experience is needed. By the end of this session, participants will have a clear mental model of what LLMs are, where they are being used across industries, and why understanding them is becoming an essential skill in today's world.

AI Agents for Beginners: Building Tool-Using Agents with MCP (Live Demo)
This session introduces AI agents in a beginner-friendly and practical way, and explores how they go beyond traditional chatbots by performing real tasks using tools. We will cover the core components of AI agents, including large language models, tool usage, and reasoning workflows. The session will also introduce the concept of Model Context Protocol (MCP) as a simple and scalable way for agents to interact with tools. A live demonstration will showcase an AI agent performing multi-step tasks, followed by a breakdown of how it works. By the end, participants will understand how to start building their own AI agents with modern approaches.
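The agent loop itself is simple to sketch. The `fake_model` router and tool registry below only mimic the choose-tool/execute/observe cycle that MCP standardises; they are not the actual protocol:

```python
def calculator(expression):
    # Toy tool: arithmetic only; empty builtins keep eval restricted.
    return str(eval(expression, {"__builtins__": {}}))

def clock(_):
    return "2026-01-01T09:00"                           # toy tool: fixed time

tools = {"calculator": calculator, "clock": clock}      # the agent's tool registry

def fake_model(task):
    # Stand-in for the LLM's decision: route maths to the calculator.
    if any(ch.isdigit() for ch in task):
        return ("calculator", task)
    return ("clock", "")

def run_agent(task):
    tool_name, tool_input = fake_model(task)            # 1. model chooses a tool
    observation = tools[tool_name](tool_input)          # 2. runtime executes it
    return f"Tool {tool_name} returned: {observation}"  # 3. result goes back

print(run_agent("17 * 3"))
```

Replacing `fake_model` with a real LLM and the registry with MCP-served tools gives the multi-step agent shown in the live demo.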

Neural networks & Deep learning architectures
This talk covers the main aspects of artificial neural networks and deep learning. We will discuss what artificial neural networks, deep learning, and deep neural networks are. Also, we will review the main stages of deep learning development and cover basic concepts of neural networks, including neurons, layers, learning, learning rate, activation functions, hyperparameters, parameters, etc. In the second part of the talk, we will review machine learning tasks that can be solved using artificial neural networks and deep learning, and analyse different types of deep neural networks and their architectures, such as multi-layer perceptron (MLP), convolutional neural networks (CNNs), autoencoders, transformers, recurrent neural networks (RNNs), long short-term memory (LSTM), gated recurrent unit (GRU), diffusion models, hybrid neural networks, etc.
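The basic concepts of neurons, weighted sums, and activation functions can be shown in a few lines; the weights below are arbitrary illustrative numbers, not trained parameters:

```python
import math

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum passed through an activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))            # sigmoid activation

def mlp_forward(x):
    # Tiny multi-layer perceptron: 2 inputs -> 2 hidden neurons -> 1 output.
    h1 = neuron(x, [0.5, -0.2], bias=0.1)
    h2 = neuron(x, [-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], [1.0, 1.0], bias=-1.0)

y = mlp_forward([0.7, 0.4])                  # output is a value in (0, 1)
```

Every architecture discussed in the talk, from CNNs to transformers, is built from this same primitive: weighted sums, biases, and nonlinear activations stacked in layers.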

Python for data analysis
We will explore core Python tools for data analysis, such as NumPy for numerical computations, SciPy for scientific methods, and Pandas for data manipulation. We will also learn about the key steps and concepts of exploratory data analysis. These tools will form the foundation for your data analysis workflow.

Introduction to Quantum Information and Quantum Computing
A quantum computer is a device that processes information according to the laws of quantum mechanics. At the heart of quantum programming is the use of quantum features (most notably superposition and entanglement) to obtain a computational advantage over classical methods. Measurement also plays a central role, shaping how information is encoded and extracted. For this reason, a strong foundation in quantum mechanics is indispensable for anyone aiming to work in quantum programming.

Version Control System basics: Git, Data Version Control (DVC), Data sources (Kaggle, etc.), ML Hubs (Hugging Face, etc.)
We will cover the basics of version control systems (VCS) and their role in maintaining a history of changes and organizing teamwork. Special attention will be paid to Git as the most common tool: its basic commands, principles of working with branches, and integration with the GitHub and GitLab platforms. We will also explain how data versioning is carried out in machine learning projects using DVC and why it is important for the reproducibility of results. In addition, popular data sources, in particular Kaggle, will be considered, along with the opportunities they offer for learning and practice. Finally, ML hubs such as Hugging Face will be presented: their role in the dissemination of models and datasets, and their use in accelerating the development of AI solutions.
