Showing posts labeled Artificial intelligence.

Wednesday, November 19, 2025

Agentic AI: Agentic AI and the future of work (Episode 4)

Agentic AI will accelerate automation, not just of simple, repetitive tasks, but of many complex workflows that once felt untouchable. That reality is both unsettling and full of opportunity. The question that matters is not whether change will happen, but how we steer it so people and organizations thrive.

Accepting a new scale of automation

It’s helpful to be frank: autonomous agents will take on a huge volume of repetitive work and many complex tasks previously seen as uniquely human. Some professions will change dramatically, others may shrink or disappear. That’s a hard truth and a moment to plan rather than panic.

Social safety and the case for shared security

Given the scale of transformation, ideas once considered radical, like a universal basic income or income-smoothing mechanisms, are worth serious discussion. These are the kinds of social tools that can buy time for reskilling and reduce the human cost of rapid disruption.

New jobs may emerge

History shows us that technological revolutions destroy some jobs and create others. Agentic AI will spawn new roles: agent designers, orchestration engineers, AI ethicists, interaction designers for human–agent teams, and jobs we can’t yet name. The net effect depends on how we train and transition talent.

Why humans still matter: innovation, values, and empathy

Even when we tweak model temperature to drive creativity, we remain inside the box, operating within the constraints of data, assumptions, and design choices. Humans are essential for going truly out of the box: imagining new problems, reframing goals, and ideating radical directions that machines cannot originate on their own.

Beyond ideation, humans carry a bedrock of values. Empathy, cultural understanding, and moral judgment are the lenses through which we sense evolving customer needs and design services that matter. Those human qualities are not optional; they are the glue that makes technological capability humane and useful.

On layoffs, short-term gains, and long-term regret

Some companies may see productivity gains and respond by massively cutting headcount. That path risks long-term damage. An employee augmented with AI can reach far greater productivity than a replaced workforce. Companies that retrain and redeploy staff can expand what they serve: new markets, new product lines, deeper customer relationships, instead of shrinking capacity.

In short: firing people to save costs today can destroy the very capability you need to grow tomorrow.

Lean, reimagined

Consider the Lean analogy: when organizations use Lean merely to cut costs and offshore work, they can hollow out their capabilities. By contrast, companies that truly embraced Lean principles, like many Japanese manufacturers, invested in people, training, and continuous improvement, enabling them to expand successfully and even bring production back to new markets.

Agentic AI offers a similar fork in the road. If you only use it to do the same work with fewer people, you might win short-term savings. If you train your teams to master agentic tools, you multiply what your people can achieve: more products, broader services, faster learning.

Uncertainty is real, but so is judgment

No one knows the absolute long-term truth about superintelligence or the full scope of disruption. That uncertainty calls for humility, not paralysis. My conviction is simple: place your bet on human potential. Invest in reskilling, build strong governance, and keep humans central to design and oversight.

Practical steps for leaders

  • Train first: Upskill teams on agentic tools rather than shrinking headcount immediately.
  • Redesign roles: Move people into higher-value jobs that use empathy, judgment, and creativity.
  • Adopt guardrails: Implement permission layers, audit logs, and human-in-the-loop checks.
  • Measure growth, not just cost: Track new revenue opportunities, products launched, and markets entered.
  • Engage stakeholders: Work with unions, communities, and policymakers on transition plans.

A positive, human-centered vision

Agentic AI will change work profoundly. The future we get depends on the choices we make today. I believe the best path is one where companies empower people: training them, entrusting them with higher-value tasks, and using AI to amplify human creativity and care.

If we build that future thoughtfully, we won’t merely replace effort with automation. We will expand what humans can imagine and build, moving out of the box together.


Coming next: Episode 5 will explore how AI agents talk to each other: coordination, negotiation, shared memory, and the foundations of multi-agent intelligence.

Tuesday, November 18, 2025

Agentic AI: Understanding the types of "AI Agents" (Episode 2)

Artificial agents didn’t appear fully formed. They evolved slowly, iteratively, and sometimes unexpectedly, much like the early stages of human reasoning. Today’s Agentic AI systems, capable of coordinating multiple specialized agents to pursue complex goals collaboratively, are the result of decades of refinement.

If Episode 1 traced the shift from prediction to generation and onward to automation and autonomy, this episode dives into the building blocks of autonomous behavior, the different types of AI agents that form the foundation of today’s intelligent systems.

Each agent type represents a distinct way of “thinking” about the world, from reacting instantly to planning strategically.

1. Simple reflex agents, intelligence as instant reaction

Reflex agents are the most primitive form of artificial intelligence. They operate like a thermostat: see something → react immediately. They have no memory, no context, and no anticipation. Fast and predictable, but limited when situations become ambiguous or complex.

Strength: Extremely fast and predictable. Limitation: Easily confused by complexity.
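
As a concrete illustration, the thermostat analogy can be sketched in a few lines of Python; the temperature thresholds and action names here are invented for the example.

```python
# A minimal sketch of a simple reflex agent: a thermostat that maps the
# current percept directly to an action, with no memory or planning.
# Thresholds and action names are illustrative, not from the article.

def reflex_agent(temperature: float) -> str:
    """Condition-action rules: see something -> react immediately."""
    if temperature < 18.0:
        return "heat_on"
    if temperature > 24.0:
        return "cool_on"
    return "idle"

print(reflex_agent(15.0))  # heat_on
print(reflex_agent(21.0))  # idle
print(reflex_agent(30.0))  # cool_on
```

Note that the agent is a pure function of its current percept: identical inputs always produce identical actions, which is exactly what makes it fast, predictable, and brittle.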

2. Model-based agents, when perception meets memory

Model-based agents maintain an internal representation of the world. They remember recent events, infer hidden state, and update their internal model as new data arrives. This ability to hold a model of the environment enables better handling of partially observable situations.

Strength: Can reason about partial observability. Limitation: Still fairly reactive with limited long-term planning.
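
A minimal sketch of the same thermostat upgraded with an internal model: it smooths noisy readings into an estimate of the true temperature and acts on that estimate rather than the raw percept. The smoothing scheme and numbers are illustrative assumptions.

```python
# A sketch of a model-based agent: it maintains an internal estimate of a
# partially observable quantity (here, the true room temperature) and
# acts on the model instead of the latest, possibly noisy reading.

class ModelBasedThermostat:
    def __init__(self, alpha: float = 0.5):
        self.estimate = None   # the internal model of the world
        self.alpha = alpha     # how quickly the model tracks new data

    def update(self, reading: float) -> None:
        # Exponential smoothing: blend memory with the new observation.
        if self.estimate is None:
            self.estimate = reading
        else:
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * reading

    def act(self) -> str:
        if self.estimate < 18.0:
            return "heat_on"
        if self.estimate > 24.0:
            return "cool_on"
        return "idle"

agent = ModelBasedThermostat()
for reading in [17.0, 25.0, 17.0, 16.0]:   # one noisy spike at 25.0
    agent.update(reading)
print(agent.act())  # heat_on -- the smoothed estimate stays low
```

A simple reflex agent would have switched the cooling on at the 25.0 spike; the internal model absorbs it.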

3. Goal-based agents, intelligence gains direction

Goal-based agents act with purpose. Instead of merely reacting, they evaluate actions by whether those actions bring them closer to a defined objective. These agents can plan, sequence tasks, and weigh alternative paths before acting.

Strength: Capable of planning and sequencing. Limitation: Goals are externally defined and typically not self-generated.
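
A small sketch of what "acting with purpose" can look like in code: breadth-first search over an invented map of rooms, where the agent computes a path to its goal before taking any action.

```python
# A sketch of a goal-based agent: it searches for a sequence of states
# that reaches a stated goal before acting. The tiny state graph below
# (rooms connected by doors) is invented for illustration.
from collections import deque

GRAPH = {
    "hall": ["kitchen", "office"],
    "kitchen": ["pantry"],
    "office": ["archive"],
    "pantry": [],
    "archive": [],
}

def plan(start, goal):
    """Breadth-first search: returns the shortest path to the goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("hall", "archive"))  # ['hall', 'office', 'archive']
```

The key difference from the reactive agents above: the agent deliberates over future states, then executes the whole sequence.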

4. Utility-based agents, choosing the best action

Where goal-based agents ask “will this achieve the goal?”, utility-based agents ask “how well will this achieve the goal?” Utility introduces trade-offs, preferences, and optimization into decision-making, allowing agents to balance multiple criteria and pick the best outcome.

Strength: Nuanced decision-making and optimization. Limitation: Designing robust utility functions can be difficult.
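
The shift from “will this work?” to “how well will this work?” can be sketched by scoring candidate actions with a utility function; the actions, weights, and numbers below are invented for illustration.

```python
# A sketch of a utility-based agent: it scores each candidate action and
# picks the one with the highest utility, balancing benefit against cost.

def utility(action):
    # A hand-written trade-off: benefit minus a weighted cost.
    return action["benefit"] - 0.5 * action["cost"]

candidates = [
    {"name": "ship_now",      "benefit": 8.0, "cost": 6.0},  # fast but costly
    {"name": "ship_tomorrow", "benefit": 6.0, "cost": 1.0},  # slower, cheap
    {"name": "do_nothing",    "benefit": 0.0, "cost": 0.0},
]

best = max(candidates, key=utility)
print(best["name"])  # ship_tomorrow  (utility 5.5 beats 5.0 and 0.0)
```

All three candidates "achieve the goal" in some sense; the utility function is what lets the agent prefer one over another, and designing it well is exactly the hard part noted above.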

5. Learning agents, systems that improve themselves

Learning agents adapt from experience. Instead of relying solely on rules or fixed models, they update their strategies based on feedback and outcomes. This learning capability is central to modern agentic architectures that refine behavior continuously.

Strength: Self-improving and versatile. Limitation: Can be hard to control and may amplify biases if not carefully governed.
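
A minimal sketch of learning from feedback: a bandit-style agent that keeps a value estimate per action and nudges it toward each observed reward. The reward probabilities, learning rate, and tool names are illustrative assumptions.

```python
# A sketch of a learning agent: it updates per-action value estimates
# from reward feedback, mostly exploiting what works and occasionally
# exploring (an epsilon-greedy, bandit-style update).
import random

random.seed(0)
values = {"tool_a": 0.0, "tool_b": 0.0}       # learned value of each action
true_reward = {"tool_a": 0.2, "tool_b": 0.8}  # hidden success rates
alpha, epsilon = 0.1, 0.2                     # learning rate, exploration rate

for _ in range(2000):
    # Mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    # Move the estimate toward the observed reward (the learning step).
    values[action] += alpha * (reward - values[action])

print(max(values, key=values.get))  # tool_b: the agent learned which tool pays off
```

Nothing told the agent which tool is better; it discovered that from outcomes, which is the same mechanism, scaled up, behind modern self-improving agentic systems.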

6. Multi-agent systems, when intelligence becomes collective

The most powerful and complex form: multiple specialized agents collaborate, communicate, and coordinate. Modern Agentic AI often composes orchestrators, planners, memory systems, and role-specific agents that together solve tasks no single agent could handle alone.

Strength: Scales to complex, multi-step problems. Limitation: Coordination, safety, and emergent behaviors become central challenges.

A clear trajectory

When we zoom out, the evolutionary path becomes clear:

  • Simple reflex → react instantly
  • Model-based → maintain a state
  • Goal-based → pursue objectives
  • Utility-based → optimize trade-offs
  • Learning agents → improve from experience
  • Multi-agent systems → collaborate and orchestrate

What started as simple reaction loops has grown into coordinated, memory-driven, goal-oriented networks capable of planning, learning, and cooperating in ways that echo human organizations. This evolution explains why Agentic AI is more than automation: it’s the emergence of structured, collaborative, adaptive intelligence.


Coming next: Episode 3 will explore how AI agents communicate with external tools and systems through the Model Context Protocol (MCP), a powerful standard that enables truly autonomous, tool-driven intelligence.

Friday, November 14, 2025

Agentic AI: The game changer already transforming how we work (Episode 1)

Artificial Intelligence has gone through several revolutions, and the next one is happening now.

We’ve shifted from prediction, where algorithms forecast outcomes, to generation, where models create text, images, and code. Now, we’re entering the age of automation and autonomy, where intelligent systems can plan, act, and learn on their own.

That’s the promise and power of Agentic AI.

If predictive AI focused on insight and generative AI focused on creativity, then Agentic AI emphasizes decision-making and action. It’s no longer just a tool that answers; it’s a collaborator that thinks.

The brain behind the agent

At the heart of every Agentic AI system is a Large Language Model (LLM) that acts as the brain. It interprets goals, reasons about context, and organizes the next best actions.

Other components act as the senses and hands. They collect data, carry out actions, and send results back to the model. Together, they create a closed cognitive loop, giving AI agents a sense of situational awareness.

The Agentic flow: perception, reasoning, action, learning

Agentic AI works through a continuous and adaptive cycle:

  • Perception: sensing and analyzing data from the environment.
  • Reasoning: the LLM evaluates objectives, plans steps, and makes decisions.
  • Action: the agent carries out those plans using digital or physical tools.
  • Learning: the system observes outcomes, adjusts strategies, and improves.

This flow transforms static AI into a living, evolving system capable of managing complex, changing environments.
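
Under stated assumptions, the cycle above can be sketched as a plain Python loop. The "LLM" reasoning step is stubbed out as a simple rule, and all names (inbox, outbox, handled) are invented for the example; in a real system, reason() would call a model.

```python
# A minimal sketch of the perceive -> reason -> act -> learn loop.

def perceive(environment):
    return environment["inbox"]                      # sense raw data

def reason(percepts, memory):
    # Decide what to do next; a real agent would delegate this to an LLM.
    pending = [m for m in percepts if m not in memory["handled"]]
    return [("reply", m) for m in pending]

def act(plan, environment):
    for verb, msg in plan:
        environment["outbox"].append(f"{verb}:{msg}")  # use a tool/"hand"
    return plan

def learn(results, memory):
    memory["handled"].update(msg for _, msg in results)  # record outcomes

env = {"inbox": ["hello", "invoice"], "outbox": []}
memory = {"handled": set()}

for _ in range(2):                                    # two loop iterations
    percepts = perceive(env)
    plan = reason(percepts, memory)
    results = act(plan, env)
    learn(results, memory)

print(env["outbox"])  # ['reply:hello', 'reply:invoice'] -- no duplicates
```

The learning step is what closes the loop: on the second pass the agent remembers what it already handled, so the same percepts no longer trigger the same actions.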

The ecosystem powering Agentic AI

Building and coordinating autonomous agents is now possible thanks to a fast-growing set of tools:

  • LangChain: connects LLMs to APIs, data sources, and logic blocks, allowing for context-aware reasoning and dynamic tool use.
  • LangGraph: builds on LangChain with a graph-based structure that organizes agentic workflows, enabling loops, branching logic, and multi-agent coordination.
  • Zapier: connects agents to thousands of real-world applications, including email, Slack, spreadsheets, and CRM systems.
  • n8n: an open-source option for secure and customizable automation flows, giving developers full transparency and control.

These platforms create the infrastructure that lets the LLM “brain” interact smartly with its environment, perceiving, reasoning, and acting in real time.
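
To make the idea concrete without depending on any particular framework's API, here is a hand-rolled sketch of what such platforms automate: registering tools and routing a structured tool call, the kind an LLM would emit, to real code. This is not the LangChain API itself; every name here is invented for illustration.

```python
# A plain-Python illustration of tool registration and dispatch, the
# plumbing that agent frameworks provide out of the box.

TOOLS = {}

def tool(fn):
    """Register a function so the agent can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: float, b: float) -> float:
    return a + b

@tool
def word_count(text: str) -> int:
    return len(text.split())

def dispatch(call: dict):
    # In a real framework, the LLM emits this structured call and the
    # framework validates it before routing it to the registered tool.
    return TOOLS[call["name"]](**call["args"])

print(dispatch({"name": "add", "args": {"a": 2, "b": 3}}))               # 5
print(dispatch({"name": "word_count", "args": {"text": "agentic ai"}}))  # 2
```

Frameworks add schema validation, retries, memory, and orchestration on top, but the core contract is this simple: the model names a tool and its arguments, and the runtime executes it.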

Why it’s a game changer

We are already seeing the effects across various industries:

  • Manufacturing: predictive agents identify and fix issues before they disrupt production.
  • E-commerce: autonomous recommender agents create tailored experiences on the fly.
  • Energy: exploration agents optimize drilling operations and resource use.

Benchmarks suggest a leap: on some complex reasoning tasks, agentic frameworks have been reported to raise model performance from around 67% to over 90%.

That’s not evolution; it’s transformation.

A new era of intelligent collaboration

As these systems gain autonomy, responsibility and governance become crucial. Agentic AI should not replace human intelligence but enhance it, creating a new partnership between humans and digital minds.

What’s next?

This post marks the start of a detailed exploration into the world of Agentic AI. In upcoming articles, we’ll cover:

  • The different levels of reasoning that make agents truly intelligent, from reflexive reactions to strategic thinking.
  • How agents communicate with tools through the Model Context Protocol (MCP).
  • How agents collaborate with one another.

Each layer will show how autonomy, communication, and learning combine to shape the next generation of intelligent systems.

Here is a link to a minimal example of how to use Mistral with Python and LangChain

So stay tuned; the era of Agentic AI is not on the way. It’s already here, changing how we create, decide, and act.

Saturday, August 23, 2025

Lean Six Sigma: The best ally for a successful agentic AI rollout?


AI is moving at lightning speed, and one of its most promising developments is agentic AI: systems that can plan and act autonomously, often across multiple steps. Exciting, right? But enthusiasm without discipline can be risky. Budgets explode, errors multiply, systems become fragile. This is where Lean Six Sigma (LSS) comes in.

What exactly is agentic AI?

Unlike traditional AI that simply responds to instructions, agentic AI acts like an autonomous actor. Imagine an assistant capable of reorganizing schedules, approving decisions, or optimizing complex workflows without immediate human intervention.

Impressive, but potentially dangerous if processes aren’t clearly defined. A single autonomous decision can create a domino effect of mistakes. Lean Six Sigma provides the structure needed to prevent this.

Lean Six Sigma: more than just an approach

Lean Six Sigma is a mindset of continuous improvement and a toolbox full of techniques to map processes, eliminate waste, measure performance, and improve quality.

For illustration, we’ll reference DMAIC (Define, Measure, Analyze, Improve, Control), a widely used and easy-to-follow framework. But remember, Lean Six Sigma also includes Value Stream Mapping, SIPOC, Kaizen, 5S, poka-yoke, FMEA, control charts, and more. DMAIC is just one way to structure improvement within the broader LSS mindset.

Why managers love Lean Six Sigma

  • Clarity and structure: helps organize work and visualize impact.
  • Small changes, big effects: targeted improvements compound to create real value.
  • Applicable everywhere: industry, services, healthcare, construction... any organization has processes and variability.
  • Change management made easy: teams see tangible results, which encourages adoption.
  • Proven track record: countless successes in industrial, hospital, and financial projects.

How LSS supports agentic AI

So how does Lean Six Sigma help protect and optimize agentic AI projects? Let’s break it down:

1. Choosing the right problems

LSS ensures AI projects focus on measurable customer value, not flashy but low-impact initiatives.

2. Ensuring data quality

Good AI needs good data. Six Sigma tools identify inconsistencies and clean inputs before model training.

3. Reducing variability

Lean Six Sigma exposes inconsistent practices and process gaps, producing more reliable AI outputs.

4. Safe pilot design

Kaizen events and controlled pilots allow experimentation in a low-risk environment, with human-in-the-loop checks.

5. Risk and compliance management

FMEA and control plans anticipate agent failures and define safeguards before scaling up.

6. Driving adoption

Clear communication and visible wins build trust, ensuring the solution is used effectively.

7. Continuous monitoring and control

Dashboards, SOPs, and indicators detect drift or errors quickly, triggering corrective actions before they escalate.

8. Scaling what works

LSS encourages standardization and knowledge capture, turning successful pilots into repeatable, organization-wide practices.
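
As one concrete illustration of point 7, a Shewhart-style control chart flags any observation that strays more than three standard deviations from a baseline. The error-rate numbers below are invented for the example.

```python
# A sketch of continuous monitoring with 3-sigma control limits: compute
# limits from a stable baseline, then flag observations outside them.
import statistics

baseline = [0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.51, 0.50]  # stable metric
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma

new_observations = [0.51, 0.49, 0.62]   # the last point drifts upward
alerts = [x for x in new_observations if not (lower <= x <= upper)]
print(alerts)  # [0.62]
```

Applied to an agent's error rate or cycle time, this kind of rule turns "watch the dashboards" into an automatic trigger for corrective action before a drift escalates.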

To conclude

Before launching your next AI initiative, take time to map processes and apply the Lean Six Sigma mindset. The discipline you bring now will pay off many times over: safer systems, measurable gains, and lasting value.

Wednesday, August 14, 2024

The benefits of artificial intelligence in deploying the sharing economy



What is the sharing economy?

The sharing economy refers to a decentralized system where individuals share access to goods, services, and skills through online platforms, reshaping various industries. In the hospitality sector, platforms like Airbnb and Vrbo allow homeowners to rent out their properties to travelers, offering a flexible alternative to traditional hotels. In transportation, services like Uber, Lyft, and BlaBlaCar connect drivers with passengers, making commuting more convenient and affordable. The sharing economy also extends to goods, with platforms like Turo enabling car owners to rent out their vehicles, and ToolShare allowing people to borrow tools and equipment locally. In the workspace arena, WeWork offers shared office spaces for freelancers and small businesses, promoting a collaborative work environment. These examples illustrate how the sharing economy is creating more efficient, accessible, and flexible alternatives to traditional business models across various sectors. Beyond convenience, the sharing economy contributes to environmental sustainability by optimizing the use of resources, reducing waste, and lowering greenhouse gas emissions through shared consumption, ultimately preserving our ecosystem for future generations.

Resource optimization

Artificial intelligence plays a crucial role in optimizing resources within the sharing economy. AI algorithms analyze vast amounts of data to match supply with demand in real-time, ensuring that resources like vehicles, accommodations, and tools are used efficiently. This not only reduces waste but also maximizes the availability of shared resources, leading to cost savings and higher profitability for providers.

Enhancing user experience

AI enhances user experience by personalizing services in the sharing economy. Through machine learning, platforms can predict user preferences, offer tailored recommendations, and provide seamless interactions. For instance, AI-driven chatbots assist customers 24/7, ensuring that their needs are met promptly and effectively. This level of personalization leads to higher user satisfaction and loyalty.

Security and trust

Security and trust are paramount in the sharing economy, and AI significantly contributes to this aspect. AI systems are used to verify user identities, detect fraudulent activities, and ensure compliance with platform policies. By analyzing patterns and behaviors, AI can flag suspicious activities, protecting both users and providers. This builds trust in the platform, encouraging more people to participate in the sharing economy.

Innovation and new services

AI fosters innovation in the sharing economy by enabling the creation of new services and business models. With AI-driven analytics, platforms can identify emerging trends, predict future demands, and innovate accordingly. This adaptability allows platforms to offer new, relevant services that meet the evolving needs of users. As a result, AI not only supports current sharing economy models but also drives their evolution, ensuring continued growth and relevance.

Tuesday, August 13, 2024

Hugging Face: your AI superpower for building AI apps

Hugging Face is like the superhero of the AI world, but instead of a cape, it’s armed with state-of-the-art natural language processing (NLP) models. Originally known for creating fun chatbot applications, Hugging Face has evolved into a powerhouse in the AI community. Today, it’s the go-to platform for developers and researchers working with machine learning models, especially those dealing with language data.

Why is Hugging Face interesting?

In a world where AI is becoming essential, Hugging Face stands out for making advanced NLP accessible to everyone. It’s not just about providing powerful models; it’s about democratizing AI. Whether you’re a seasoned data scientist or someone just starting, Hugging Face makes it easy to integrate cutting-edge technology into your projects. With an ever-growing library of pre-trained models, you can save time and resources, jumping straight into building something impactful.

How can a business use Hugging Face to build a chatbot?

Imagine you’re running a business and want to enhance customer service with a chatbot. With Hugging Face, you don’t need a PhD in AI to get started. You can simply tap into their models to build a chatbot that understands and responds to customer inquiries naturally and effectively. For example, using the ‘transformers’ library from Hugging Face, you can fine-tune a pre-trained model to recognize the specific needs of your business. The result? A chatbot that’s not only smart but also tailored to your brand’s voice, boosting customer satisfaction and freeing up your human agents for more complex tasks.

The power of "spaces": spotlight on AI comic factory

Spaces on Hugging Face is where innovation meets creativity. It’s a platform that allows developers to host and share their AI-powered applications with ease. Take the AI Comic Factory as an example. This app harnesses the power of Hugging Face models to generate unique comic strips, blending the magic of AI with the art of storytelling. It’s not just a tool; it’s a playground for creators to push the boundaries of what’s possible with AI. For businesses, Spaces offers a way to deploy custom AI solutions without the hassle of managing infrastructure, making it easier than ever to turn ideas into reality. https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory

Sunday, August 11, 2024

What is Deep Learning?

Imagine teaching a child to recognize animals. You start by showing the child many pictures of different animals—dogs, cats, birds, etc.—and tell them what each one is. At first, the child might make mistakes, confusing a dog for a cat or a bird for a plane. But as you show them more and more examples, they start to get better at recognizing the animals on their own. Over time, they don’t just memorize pictures; they begin to understand what makes a dog a dog or a cat a cat. This process of learning from examples is similar to what happens in deep learning.

Deep learning is a subset of machine learning, which in turn is a branch of artificial intelligence that allows computers to learn and make decisions by themselves, much like how a child learns. Instead of being explicitly programmed with rules, deep learning models are fed large amounts of data, and they learn patterns and make predictions based on that data. It’s called “deep” learning because the model is made up of many layers, much like an onion. Each layer learns different aspects of the data, starting from simple shapes and colors to more complex concepts, like recognizing faces or understanding speech.

How Does It Work?

Let’s go back to the child learning animals. If the child was a deep learning model, each time you show a picture, it goes through many layers of understanding. The first layer might only recognize simple things like edges or colors. The next layer might recognize shapes, and another might start identifying specific features like ears or tails. Eventually, after going through all these layers, the model can confidently say, "This is a dog!" This layered approach allows deep learning models to understand very complex data, like images or speech, by breaking it down into simpler pieces.

Now, if you struggled with math as a child, feel free to skip this paragraph marked with *** and jump straight to the section titled "The Need for Training Data"

********************

Deep learning works by using artificial neural networks, which are computational models inspired by the structure and function of the human brain.

These networks consist of several key components:

  • Neurons (nodes): the basic units of the network that process and transmit information. Each neuron receives inputs, performs a mathematical operation, and passes the result to the next layer.
  • Layers: the network is organized into layers:
      • Input layer: the first layer, which receives the raw data.
      • Hidden layers: the intermediate layers where the actual computation happens. Deep learning networks have multiple hidden layers, allowing them to capture complex patterns in the data.
      • Output layer: the final layer, which produces the prediction or classification based on the learned patterns.
  • Weights: each connection between neurons has a weight that determines the strength of the signal being passed. During training, the network adjusts these weights to minimize errors in its predictions.
  • Biases: additional parameters added to each neuron to help the model better fit the data. They allow the network to shift the activation function, making it more flexible.
  • Activation functions: functions that decide whether a neuron should be activated by applying a transformation to the input signal. Common choices include ReLU (Rectified Linear Unit), Sigmoid, and Tanh. They introduce non-linearity into the network, enabling it to model complex relationships.
  • Loss function: measures how far the network’s predictions are from the actual targets. The goal of training is to minimize this loss, making the model more accurate.
  • Backpropagation: during training, the network uses backpropagation to update the weights and biases based on the error calculated by the loss function. This involves computing the gradient of the loss with respect to each weight and bias, then adjusting them in the direction that reduces the error.
  • Optimization algorithm: an algorithm such as Stochastic Gradient Descent (SGD) or Adam, used to adjust the weights and biases during backpropagation to minimize the loss.

When data is fed into the network, it passes through these components layer by layer. Initially, the network may make errors in its predictions, but as it continues to process more data and adjusts its weights and biases, it learns to make increasingly accurate predictions. This ability to learn from large amounts of data and capture intricate patterns is what makes deep learning so powerful in tasks like image recognition, natural language processing, and more.
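
To tie these components together, here is a toy sketch: a single neuron with weights, a bias, a sigmoid activation, and plain gradient descent, learning the logical OR function. All numbers are illustrative, and a real deep network would stack many such neurons across layers.

```python
# A toy end-to-end illustration of the components above: forward pass,
# loss gradient, backpropagation, and weight updates, for one neuron.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table
w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.5         # learning rate for gradient descent

for _ in range(2000):                       # training epochs
    for x, target in data:
        z = w[0] * x[0] + w[1] * x[1] + b   # weighted sum (forward pass)
        y = sigmoid(z)                      # activation
        # Backpropagation: for a cross-entropy loss with a sigmoid
        # output, the chain rule collapses to dL/dz = y - target.
        grad = y - target
        w[0] -= lr * grad * x[0]            # dL/dw_i = dL/dz * x_i
        w[1] -= lr * grad * x[1]
        b -= lr * grad                      # dL/db = dL/dz

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)  # [0, 1, 1, 1] -- the neuron has learned OR
```

Errors flow backward as gradients and nudge the weights and bias a little on every example; after enough passes over the data, the predictions match the targets.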

********************

The Need for Training Data

Just like the child needs to see many pictures to learn, a deep learning model needs a lot of data to become good at what it does. The more examples it sees, the better it becomes at making predictions. If you only show a few pictures, the child—or the model—might not learn well and could make a lot of mistakes. But with enough diverse and accurate examples, the model learns to generalize, meaning it can recognize things it’s never seen before.

Why is Deep Learning So Effective?

Deep learning has become incredibly effective because of its ability to learn from vast amounts of data and make sense of it in ways that are often better than humans. For example, deep learning models can now recognize faces in photos, translate languages, and even drive cars! These models have achieved breakthroughs in areas like healthcare, where they can help doctors detect diseases from medical images, or in entertainment, where they power recommendation systems on platforms like YouTube.

Advancements Through Deep Learning

The advancements made through deep learning are staggering. Things that were once thought to be science fiction, like talking to a virtual assistant (think Siri or Alexa), are now part of everyday life. In many cases, these deep learning models outperform traditional computer programs because they can adapt and improve as they’re exposed to more data. This adaptability makes them powerful tools in our increasingly data-driven world.

Last but not least

One of the most revolutionary advancements in deep learning is the development of a type of architecture called transformers. Transformers are particularly powerful because they can process and understand data in parallel, making them incredibly efficient at handling large and complex datasets. This architecture is the backbone of large language models (LLMs) on which the well-known ChatGPT is based. Transformers enable these models to understand and generate human-like text by analyzing vast amounts of information and learning patterns in language. This is why ChatGPT can hold conversations, answer questions, and even write essays, all thanks to the power of transformers in deep learning.
