Updates to RobotFreedom.AI

Since our last update at Maker Faire, we’ve made significant improvements to our robot lineup, focusing on increased autonomy, richer human interaction, and emotional expression. Core to this is our AI framework, RobotFreedom.AI, now available on GitHub.

The design principles for our autonomous AIs:

  1. Runs completely locally on a Raspberry Pi (no calls out to third-party APIs)
  2. Has a distinctive personality based on the Big Five personality traits (see the sketch after this list)
  3. Learns from interactions (initiated when in sleep mode)
  4. Makes informed decisions based on a knowledge graph
  5. Can carry on conversations with other robots and with humans
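
As a rough illustration of principle 2 (not the actual RobotFreedom.AI API — the class, field, and action names here are made up for the example), a Big Five profile can be a small set of scores that bias which action a robot picks:

```python
# Illustrative sketch only -- not the actual RobotFreedom.AI API.
# Shows one way a Big Five profile could bias a robot's behavior choices.
from dataclasses import dataclass
import random


@dataclass
class Personality:
    """Big Five traits, each scored 0.0 (low) to 1.0 (high)."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def pick_action(self, options: list[str]) -> str:
        # A highly extraverted robot favors social actions; a highly open
        # one favors exploration. The weights are purely illustrative.
        weights = []
        for option in options:
            weight = 1.0
            if option == "greet_human":
                weight += 2.0 * self.extraversion
            elif option == "explore_room":
                weight += 2.0 * self.openness
            elif option == "tidy_workspace":
                weight += 2.0 * self.conscientiousness
            weights.append(weight)
        return random.choices(options, weights=weights, k=1)[0]


curious_bot = Personality(0.9, 0.4, 0.3, 0.7, 0.2)
print(curious_bot.pick_action(["greet_human", "explore_room", "tidy_workspace"]))
```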

To achieve this, we used an agentic AI framework rather than just tapping directly into a chatbot. By building a core AI designed to meet our specific goals, we had more control over its behavior and could directly see how the AI worked in real time, which proved to be a great educational tool.
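
To show why that visibility matters — and only as an illustration, since this is not the framework’s actual code — here is a stripped-down sense-decide-act loop. Because every observation, decision, and action passes through one place, each step can be logged and watched in real time. The function names and placeholder logic are assumptions for the sketch:

```python
# Illustrative agent loop, not the framework's actual code: the point is
# that every perception, decision, and action passes through one loop,
# so it can be logged and inspected in real time.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")


def perceive() -> dict:
    # Placeholder for real sensor / microphone input.
    return {"heard_speech": False, "obstacle_cm": 120}


def decide(observation: dict, knowledge: dict) -> str:
    # Placeholder policy: consult the knowledge store, pick an action.
    if observation["obstacle_cm"] < 30:
        return "turn_away"
    if observation["heard_speech"]:
        return "respond"
    return knowledge.get("default_action", "wander")


def act(action: str) -> None:
    # Placeholder for motor / speech output.
    log.info("executing %s", action)


knowledge = {"default_action": "wander"}
for _ in range(3):  # a few iterations for demonstration
    observation = perceive()
    action = decide(observation, knowledge)
    log.info("observation=%s -> action=%s", observation, action)
    act(action)
    time.sleep(0.5)
```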

One of the key upgrades is the addition of machine learning algorithms, which enable the robots to learn from their interactions and quickly adapt to new situations. This makes them more autonomous in their decision-making and more efficient and effective at completing tasks.

We’ve also made notable strides in expanding the robots’ interactive capabilities, incorporating features such as voice recognition, gesture control, and tactile feedback. These enhancements enable users to engage with the robots on a deeper level, fostering a more immersive and engaging experience.

Some of the specific updates include:

* Advanced sensor arrays for improved navigation and obstacle detection (see the sketch after this list)

* Enhanced machine learning algorithms for adaptive decision-making

* Voice recognition and speech-to-text capabilities

* Tactile feedback mechanisms for haptic interaction
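
As a rough sketch of the obstacle-detection idea, the snippet below reads an HC-SR04-style ultrasonic sensor with gpiozero on a Raspberry Pi. The GPIO pin numbers and the 30 cm threshold are placeholders, not values from our actual sensor array:

```python
# Minimal obstacle-detection sketch using gpiozero's DistanceSensor
# (HC-SR04-style ultrasonic sensor). Pin numbers below are placeholders;
# adjust them to match your wiring.
from gpiozero import DistanceSensor
from time import sleep

sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)

while True:
    distance_cm = sensor.distance * 100  # .distance is reported in metres
    if distance_cm < 30:
        print(f"Obstacle at {distance_cm:.0f} cm -- stop or turn")
    sleep(0.2)
```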

These updates have significantly enhanced the robots’ autonomy, interactivity, and overall user experience. We’re excited to continue refining our designs and pushing the boundaries of what’s possible with robotics and AI.

We have been busy working on the next release of our robot software platform.

Major features:

  • Robots can coordinate actions using WebSocket communication
  • Dedicated HTTP server for each robot
  • Added Piper TTS for voice output
  • Added Vosk for speech recognition
  • Added Ollama and LangChain for the chatbot
  • Improved random movement generator
  • Tons of bug fixes
  • Improved debug mode
  • Low-memory mode for Raspberry Pi 1-3
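
To give a feel for how the new voice stack fits together, here is a hedged sketch of one conversational turn: Vosk transcribes a recorded question, a local Ollama model answers via LangChain, and the Piper CLI speaks the reply. The model names, file names, and prompt are placeholders, not the platform’s actual configuration:

```python
# Hedged sketch of one speak-back turn: Vosk transcribes a WAV recording,
# LangChain's Ollama chat model generates a reply, and the Piper CLI
# renders the reply to a WAV file. Model and file names are placeholders.
import json
import subprocess
import wave

from vosk import Model, KaldiRecognizer
from langchain_ollama import ChatOllama

# 1. Speech to text with Vosk (expects mono 16-bit PCM audio).
wav = wave.open("question.wav", "rb")
recognizer = KaldiRecognizer(Model("vosk-model-small-en-us-0.15"), wav.getframerate())
while True:
    data = wav.readframes(4000)
    if len(data) == 0:
        break
    recognizer.AcceptWaveform(data)
question = json.loads(recognizer.FinalResult())["text"]

# 2. Generate a reply with a local Ollama model via LangChain.
llm = ChatOllama(model="llama3.2")
reply = llm.invoke(f"Answer briefly, in character as a friendly robot: {question}").content

# 3. Text to speech with the Piper CLI (writes reply.wav).
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "reply.wav"],
    input=reply.encode("utf-8"),
    check=True,
)
```

On a Raspberry Pi, running each stage in its own process keeps the loop responsive; the single-script version above is just the easiest way to see the data flow end to end.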

Tested on macOS and Raspberry Pi 1-5.

You can see our Robotic AI platform in action here.

Happy creating!