As matosinhos.tech starts its new tech season, we want to take a moment to share what we have in store for 2025.
New year, new MT?
Nah, same old, just better.
You know, at its core, our community is all about learning with each other, exploring the Portuguese tech ecosystem, and delivering exceptional content.
That's why in 2025 we'll continue sharing our members' video insights through our smallBytes, taking you to incredible venues for our events (you might need sunscreen this year ☀️), creating workshops led by top experts, and providing you with inspiring articles.
And that's where we want to start.
We are thrilled to introduce our newest MT newsletter contributors:
Pedro and Ana Martins.
Pedro will keep you updated on the latest tech events worldwide and spotlight innovative startups in the Portuguese tech scene. Meanwhile, the talented Ana Martins will help demystify AI, providing you with a clearer understanding of how it works and how we can leverage it in our lives and careers.
So, we hope you enjoy what we have planned for 2025, and, together, let’s make this the best season eveeer!
The Emotional Robots’ Takeover at CES 2025
by
The annual Consumer Electronics Show (CES) has always been a window into the future, and the 2025 edition in Las Vegas was no exception. From AI-driven gadgets to autonomous vehicles, CES 2025 showcased many innovations, each one promising to transform our daily lives.
The deeper I looked into it, the clearer it became how companies are expanding their horizons. In robotics, for example, technology is no longer confined to industrial functions or service tasks, as robots are now entering our personal lives and offering emotional support.
Personally dealing with anxiety at times, I found these developments fascinating, especially because they show how technology and robots can complement human connections. Among them, three stood out for their unique approaches.
Jennie and ROPET

Jennie, launched by Tombot and crafted by Jim Henson’s Creature Shop, carries a lifelike design paired with truly inventive features. It not only reacts to touch and responds to voice commands but also includes an integrated app for further customization, allowing users to tailor its behavior to suit their needs.
Similarly, ROPET, developed by the Hong Kong-based startup Ropet Intelligent Technology Co., Ltd., combines advanced AI with customizable attributes to deliver a highly personalized and engaging interaction.
While I can’t imagine giving up the unique quirks and responsibilities of owning a living pet, I understand the appeal of both Jennie and ROPET. Their lack of need for feeding or vet visits makes them a practical solution for individuals with busy or restricted lifestyles. Most importantly, they’re beneficial for those they’re designed to help—people coping with loneliness, anxiety, or other psychological challenges—by providing a soothing companion without the traditional upkeep.
Romi
Whereas Jennie and ROPET aim to mimic the feeling of animal care, Romi by Mixi Inc. takes a different approach. Romi’s conversational AI isn’t just impressive; it’s intriguing. With over 150 facial expressions and the ability to recall past conversations and adapt to our mood, it feels like it genuinely "gets you". But I wonder: does it truly feel empathetic, or is it just really good at faking it? As someone who values genuinely profound communication, I’m curious if Romi could ever bridge that gap and be there for those who might need a more meaningful chat.
What to expect
That’s why, despite the impressive strides these handy devices represent, it’s essential to acknowledge their limitations. Even though they provide some relief, reproduced emotional responses can’t match the spontaneity, unpredictability, and warmth of a real interaction, or the impact of professional assistance.
However, I still believe Jennie, Romi, and ROPET (and their CES 2025 counterparts) are more than gadgets. Although they may never replace the richness of human relationships, they remind us of how robotics can go beyond efficiency to enhance the aid systems we rely on. For me, this marks an exciting step toward a future where technology is designed not just to function but to feel.
What do you know about agents?
by

A lot of my conversations these days revolve around AI and agents. But it wasn't until recently that I started asking people what they thought agents were. What they said genuinely surprised me and inspired me to write this post to clear up some agent-related misconceptions.
Many skilled technical people are under the impression that building agents is only available to big companies that have the resources to create these digital beings that can make decisions and navigate the digital world unattended. I hope to clarify that this isn't true and that anyone can build an agent.
A good friend of mine, with extensive experience in HR departments of tech companies, thought that "agent" was simply a term for specialised chatbots, such as a customer service bot within a company. While this is not entirely wrong, it is just a small part of the bigger picture.
I couldn't stop thinking about this. How many other people think this way too?
Some of the misconceptions, I believe, come from a mismatch between theory - how researchers think about and define AI agents - and practice - what specific agents AI companies release with today's technology and what they advertise through their marketing efforts.
There are many posts about agents out there. I want you to think of this one as lesson 0 of what agents are and how they work in practice. Let’s dive in.
AI Agents
To understand agents in practice, it is necessary to have a basic concept of how Large Language Models (LLMs) work. If you have had conversations with ChatGPT or Claude, you will intuitively understand how LLMs work: you send some text as an instruction or question (we will call this a "prompt") and the LLM generates some text in return. With this in mind:
ChatGPT is a chat interface, GPT-4 is one of the LLMs that power it.
Claude is the chat interface, Claude 3.5 Sonnet is one of the LLMs that power it.
Prompt Engineering [1] is the art of crafting and refining prompts to improve the performance of LLMs. For example, you might have started your prompts with You are a world-class software engineer or Explain this concept to a 10-year-old. (example below)
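Here is a minimal sketch of what such a prompt prefix looks like in practice, assuming the anthropic Python SDK and an ANTHROPIC_API_KEY environment variable; the model name and the question are just illustrative examples.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The same question a user might ask, prefixed with a role-setting instruction
prompt = "You are a world-class software engineer. Explain what a race condition is."

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name, swap in any Claude model
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)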
Believe it or not, this is all the technology you need to build an agent.
On to a simple theoretical definition: AI Agents or Agentic Systems (I have been calling them agents, for short) have access to the real world through extended capabilities, such as external tools or services, and can work towards their goals with minimal supervision. It's important to note that this theoretical definition existed long before we had the technology to bring agents to life.
Equally, I believe that in the future we will be able to build agents with other technologies we don't yet know today.
While Claude and ChatGPT are generally not considered agents, they still exhibit some attributes of agency. For example, ChatGPT can search the web in real time (access to the real world) or execute Python code to perform calculations (external tools).
These interfaces still only act through a standard question/answer mechanism, and they can't make any changes in the real world.
Being an agent is not a binary Yes or No, it’s more of a sliding scale of capabilities.
It might be more accurate, if not easier, to look at agents by their degree of agency or autonomy. For example, Claude and ChatGPT are systems with relatively low degrees of agency, since they only aid human decision-making or produce outputs without acting in the world. A system with a high degree of agency could be a customer service agent that could autonomously respond to customer inquiries, process refund requests and escalate complex issues to human representatives. This agent would have access to the customer database and could analyse patterns in interactions to identify recurring issues.
Building agency into an LLM
Now that we’ve defined what an agent is, let’s look at how you can build your own.
To start, I created a simple question/answer chatbot that the more technical of you can execute in your terminal using Anthropic's API (and a Claude LLM of your choice) [2].
Systems like Claude and ChatGPT differentiate between two types of prompts:
System prompt - the very first message sent to the LLM, hidden from the user.
User prompt - all of the questions, tasks or instructions the user asks the LLM during the course of the conversation.
Since we are now building our own agent, the only prompt we can control is the system prompt. In my chatbot example below, the system prompt used is simply:
You are a world-class AI Assistant.
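For the curious, here is a minimal sketch of that question/answer chatbot loop, assuming the anthropic Python SDK and an ANTHROPIC_API_KEY environment variable; the model name is just an example, and the full version lives in the code linked in the footnotes.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = "You are a world-class AI Assistant."
messages = []  # the running conversation: user prompts and assistant replies

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    messages.append({"role": "user", "content": user_input})
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # example model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,   # the hidden system prompt
        messages=messages,      # every user prompt (and reply) so far
    )
    reply = response.content[0].text
    messages.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)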

Agent = Thinking + Acting
So how can we build on this and give the LLM even more agency? Interestingly, we can elicit thinking in LLMs by way of some clever prompt engineering [3]. One common example is adding Think through this problem step by step.
Similarly, we can use Prompt Engineering to get the LLM to make decisions and act in the real world. One such agent is the ReAct [4] agent I built below, which is designed to both Reason and Act - because our agent will think about what to do and then take action when needed.
Think of building an agent like creating a helpful assistant who can use specific tools. I'll explain this using a simple calculator example. But before we dive in, here's how it works, step by step:
System prompt: First, we tell the LLM what tools it can use and to think about what to do. Just like you might tell an assistant Here's the calculator on your desk if you need it.
LLM response to user prompt: When we ask the LLM a question, it follows the reasoning outlined in the system prompt:
Thinks: Do I need a calculator for this?
If yes, it says something like: I should use the calculator to solve 3 * 4
If no, it just answers directly
Behind the scenes, our code:
Catches when the LLM wants to use the calculator
Takes the math problem (like 3 * 4)
Calculates the result
Sends the answer back to the LLM
So I've modified the question/answer chatbot to become a ReAct agent that can use a calculator [5].
Most basic agent examples use a calculator because LLMs are not naturally good at math. LLMs work by predicting the most likely next token. While most of the simplest calculations would be in their training data, they will get more complicated calculations wrong.
One such complicated calculation is 567890 * 34567899876 (the result is 19630764660581640). Let's use it to compare our ReAct agent vs the simple chatbot:


It’s important to note that acting, in this context, doesn't mean that the agent can suddenly decide to create a file on your computer or search the web. Instead, it's like a coordinator who can request specific actions that we've pre-approved and built into the system.
The following is the system prompt used for the ReAct agent [6]:
You run in a loop of Thought, Action, PAUSE, Observation.
At the end of the loop you output an Answer
Use Thought to describe your thoughts about the question you have been asked.
OPTIONAL - Use Action to run one of the actions available to you - then return PAUSE.
OPTIONAL - Observation will be the result of running those actions.
... (this Thought/Action/Observation can repeat N times)
Your available actions are:
calculate:
e.g. calculate: 4 * 7 / 3
Runs a calculation and returns the number - uses Python so be sure to use floating point syntax if necessary
Example session:
Question: 87 * 91
Thought: I should use calculate
Action: calculate: 87*91
PAUSE
You will be called again with this:
Observation: The result of the calculation is 7917.
You then output: 7917.
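Behind that prompt sits a short driver loop that implements the catch/calculate/send-back steps described earlier. Here is a minimal sketch of it, assuming the anthropic Python SDK and an ANTHROPIC_API_KEY environment variable; the model name, the react_prompt.txt file and the helper names are illustrative, and the full version is in the code linked in the footnotes.

import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The Thought / Action / PAUSE / Observation prompt shown above, saved to a file
REACT_SYSTEM_PROMPT = open("react_prompt.txt").read()

# Matches lines like "Action: calculate: 87*91" in the model's reply
ACTION_RE = re.compile(r"^Action: (\w+): (.*)$", re.MULTILINE)

def calculate(expression: str) -> str:
    # Toy tool; eval() on untrusted input is unsafe outside a demo
    return str(eval(expression))

KNOWN_ACTIONS = {"calculate": calculate}

def query(question: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    reply = ""
    for _ in range(max_turns):
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # example model name
            max_tokens=1024,
            system=REACT_SYSTEM_PROMPT,
            messages=messages,
        )
        reply = response.content[0].text
        messages.append({"role": "assistant", "content": reply})
        match = ACTION_RE.search(reply)
        if not match:
            return reply  # no Action requested: this is the final Answer
        action, arg = match.groups()              # 1. catch the tool request
        observation = KNOWN_ACTIONS[action](arg)  # 2-3. run the calculation
        messages.append({                         # 4. send the result back
            "role": "user",
            "content": f"Observation: The result of the calculation is {observation}.",
        })
    return reply

print(query("What is 567890 * 34567899876?"))

Run on the multiplication above, the agent should route the arithmetic through the calculate action instead of guessing the digits itself.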
You might be thinking, if we can't call Claude/ChatGPT an agent, why should we call this an agent?
Yes, an agent that can only use a calculator seems pretty basic. However, this simple pattern of thinking and acting is actually the foundation for building much more capable agents. Using these same principles, you could create agents that organize files on your computer, research and write reports by searching the web, or even handle complex customer service tasks like the example we discussed earlier. The calculator is just the beginning – it's the pattern that matters.
Why would we need agent frameworks?
All the frameworks out there use these same simple principles as the base for their systems.
What they do is abstract common design patterns to make it easier to build more complex applications such as:
Tool use [7]: Extending the LLM's capabilities by building tools for search, code execution, calculation, etc. Both the GPT-4 and Claude APIs allow you to define tools without having to explicitly include them in the prompt (see the sketch below). I left this out on purpose because I consider it an engineering utility rather than a property of the LLM.
Conversational patterns [8]: We might want to create multiple agents that collaborate in solving a task. For example, in building a system to code a software feature, we might want to break down the feature into subtasks to be executed by different roles — such as a product manager, software engineer, QA and so on — and have different agents accomplish different subtasks.
AutoGen, Crew AI, and LangChain are popular agent frameworks that simplify some of these patterns.
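To make the tool use point above concrete, here is a minimal sketch using Claude's built-in tools parameter. It assumes the anthropic Python SDK, an ANTHROPIC_API_KEY environment variable and an example model name, and the calculate tool definition is hypothetical.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A calculator tool described as a JSON schema instead of inside the prompt
tools = [{
    "name": "calculate",
    "description": "Evaluate an arithmetic expression and return the numeric result.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '567890 * 34567899876'"},
        },
        "required": ["expression"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What is 567890 * 34567899876?"}],
)

# If the model decides to call the tool, the reply contains a tool_use block
for block in response.content:
    if block.type == "tool_use":
        print("Tool requested:", block.name, block.input)
    elif block.type == "text":
        print("Model said:", block.text)

Your code would then run the requested tool and send the result back as a tool_result block; it is the same Observation step as in the ReAct loop, just with the parsing handled for you.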
What's next
Hopefully by now you have a better understanding of agents and how you could build one yourself.
I will leave you with further reading and some suggestions for practical exercises if you want to experiment with building your own agent.
Further reading
If you are looking for a comprehensive explanation of agents, including more design patterns, read this: https://huyenchip.com/2025/01/07/agents.html
More agent design patterns from Anthropic here: https://www.anthropic.com/research/building-effective-agents
Safety concerns and best practices for building agents from OpenAI: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
My own article highlighting potential harms for a scientific agent: https://substack.com/home/post/p-149483896
More on Prompt Engineering: https://learnprompting.org/docs/basics/introduction
Build your own agent
The ReAct agent is only one type of agent. Build your own agent by modifying the system prompt and using Anthropic's built-in tool-use functionality.
Find a short agent course on DeepLearning.AI.
1. Prompt Engineering: https://learnprompting.org/docs/basics/prompt_engineering
2. Find the code here. Follow the initial setup guide to get an Anthropic API key.
3. How do LLMs think? https://substack.com/home/post/p-15228602
4. ReAct: Synergizing Reasoning and Acting in Language Models: https://react-lm.github.io/
5. Find the code here. Follow the initial setup guide to get an Anthropic API key.
6. System prompt based on https://til.simonwillison.net/llms/python-react-pattern
7. Agentic Design Patterns Part 3, Tool Use: https://www.deeplearning.ai/the-batch/agentic-design-patterns-part-3-tool-use
8. Agentic Design Patterns Part 5, Multi-Agent Collaboration: https://www.deeplearning.ai/the-batch/agentic-design-patterns-part-5-multi-agent-collaboration/