A New Kind of Engineering

Johanna Appel · Published in Towards Data Science · 4 min read
April 15, 2023
Image by author based on photo by ALAN DE LA CRUZ on Unsplash

As of writing this (April 2023), frameworks such as langchain [1] are pioneering ever more complex use cases for LLMs. Recently, software agents augmented with LLM-based reasoning capabilities have started the race towards human-level machine intelligence.

What are we talking about?

Agents are a pattern in software systems: algorithms that make decisions and interact relatively autonomously with their environment. In the case of langchain agents, the environment usually consists of text-in/text-out interfaces to the internet, the user, or other agents and tools.
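
In code, the pattern boils down to a decision loop. Below is a deliberately minimal sketch of that idea; the `llm` and `web_search` helpers are hypothetical stand-ins, and the loop is illustrative rather than langchain's actual implementation:

```python
# Minimal sketch of the agent pattern: an LLM repeatedly decides whether
# to call a tool or give a final answer. `llm` and `web_search` are
# hypothetical stand-ins, not a real framework's API.

def llm(prompt: str) -> str:
    """Stand-in for a call to a language model (hypothetical)."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Stand-in for a text-in/text-out tool (hypothetical)."""
    raise NotImplementedError

TOOLS = {"search": web_search}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Loop: show the transcript so far to the LLM, then act on its decision."""
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        decision = llm(
            transcript
            + "\nReply 'FINAL: <answer>' when done, or "
            "'TOOL: <name>: <input>' to use a tool."
        )
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        _, name, tool_input = decision.split(":", 2)
        observation = TOOLS[name.strip()](tool_input.strip())
        transcript += f"{decision}\nObservation: {observation}\n"
    return "Step budget exhausted without a final answer."
```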

Running with this concept, other projects [2,3] have started working on more general problem solvers (a sort of ‘micro’ artificial general intelligence, or AGI: an AI system that approaches human-level reasoning capabilities). Although the current incarnations of these systems are still quite monolithic, in that they come as one piece of software that takes goals/tasks/ideas as input, it is easy to see in their execution that they rely on multiple distinct sub-systems under the hood.

The new paradigm we see with these systems is that they model thought processes: “think critically and examine your results”, “consult several sources”, “reflect on the quality of your solution”, “debug it using external tooling”, … these are close to how a human would think as well.
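
As a toy illustration, a “reflect on the quality of your solution” process can be modelled as three chained LLM calls, reusing the hypothetical `llm` helper from the sketch above:

```python
# Sketch of a "draft, critique, revise" thought process built from plain
# prompt chaining; `llm` is the hypothetical stand-in introduced earlier.

def draft_critique_revise(task: str) -> str:
    draft = llm(f"Solve the following task:\n{task}")
    critique = llm(
        f"Task: {task}\nDraft solution:\n{draft}\n"
        "Think critically: list the concrete weaknesses of this draft."
    )
    return llm(
        f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
        "Write an improved solution that addresses every point of critique."
    )
```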

Now, in everyday (human) life, we hire experts to do jobs that require specific expertise. And my prediction is that in the near future, we will hire some sort of cognitive engineers to model AGI thought processes, probably by building task-specific multi-agent systems, to solve specific tasks with higher quality.

Why would I assume this? Why are monolithic AGIs not necessarily good enough?

From how we work with LLMs today, we are already doing this: modelling cognitive processes. We do it in specific ways, using prompt engineering and plenty of results from adjacent fields of research, to achieve a required output quality. Even though what I described above might seem futuristic, it is already the status quo.

Where do we go from here? We will probably see ever smarter AI systems that might even surpass human-level capabilities at some point. And as they get smarter, it will become ever harder to align them with our goals, with what we want them to do. AGI alignment, and the security concerns around over-powerful unaligned AIs, is already a very active field of research, and the stakes are high, as explained in detail e.g. by Eliezer Yudkowsky [4].

My hunch is that smaller, i.e. ‘dumber’, systems are easier to align, and will therefore deliver a given result at a given quality with higher probability. These systems are precisely what we can build using the cognitive engineering approach.

What we should be doing

  • We should get a good experimental understanding of how to build specialized AGI systems
  • From this experience, we should create and iterate on the right abstractions to better enable the modelling of these systems
  • With the abstractions in place, we can start creating re-usable building blocks of thought, just as we use re-usable building blocks to create user interfaces (see the sketch after this list)
  • In the near future, we will understand patterns and best practices for modelling these intelligent systems, and with that experience will come an understanding of which architectures lead to which outcomes
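
To make the building-blocks idea concrete, here is one possible, entirely hypothetical abstraction: treat each block as a text-to-text function, so blocks compose into larger thought processes much like UI components compose into screens. None of these names come from an existing library:

```python
# Hypothetical sketch of re-usable "building blocks of thought": each
# block maps text to text, so blocks compose into larger processes.

from functools import reduce
from typing import Callable

ThoughtBlock = Callable[[str], str]

def chain(*blocks: ThoughtBlock) -> ThoughtBlock:
    """Compose blocks left to right into a single block."""
    return lambda text: reduce(lambda acc, block: block(acc), blocks, text)

def brainstorm(text: str) -> str:
    return llm(f"List three distinct approaches to: {text}")

def pick_best(text: str) -> str:
    return llm(f"Pick the most promising approach below and justify it:\n{text}")

# One possible composed process, reusing draft_critique_revise from above:
solve = chain(brainstorm, pick_best, draft_critique_revise)
```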

As a positive side effect, the work and experience gained here may also teach us how to better align smarter AGIs.

Where this will lead

I expect knowledge from different disciplines to merge into this emerging field soon.
Research on multi-agent systems and their use for problem-solving, as well as insights from psychology, business management and process modelling, can all be beneficially integrated into this new paradigm and into the emerging abstractions.

We will also need to think about how best to interact with these systems. Human feedback loops, or at least regular evaluation points along the process, can help achieve better results; you may know this personally from working with ChatGPT.
This is a previously unseen UX pattern, in which the computer becomes more like a co-worker or co-pilot that does the heavy lifting of low-level research, formulation, brainstorming, automation or reasoning tasks.
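
One hypothetical way to model such an evaluation point is a wrapper that pauses a thought block, shows the intermediate result to a person, and feeds any corrections back in before continuing:

```python
# Sketch of a human checkpoint wrapped around a thought block: the
# pipeline pauses, shows the intermediate result, and incorporates
# corrections before moving on. Purely illustrative.

def with_human_checkpoint(block: ThoughtBlock) -> ThoughtBlock:
    def checked(text: str) -> str:
        result = block(text)
        print(f"Intermediate result:\n{result}")
        feedback = input("Press Enter to accept, or type corrections: ")
        if feedback.strip():
            return llm(f"Revise this result:\n{result}\nFeedback: {feedback}")
        return result
    return checked

# e.g. solve = chain(brainstorm, with_human_checkpoint(pick_best), draft_critique_revise)
```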

About the author

Johanna Appel is co-founder of the machine-intelligence consulting company Altura.ai GmbH, based in Zurich, Switzerland.

She helps companies to profit from these ‘micro’ AGI systems by integrating them into their existing business processes.

References

[1] Langchain GitHub Repository, https://github.com/hwchase17/langchain

[2] AutoGPT GitHub Repository, https://github.com/Significant-Gravitas/Auto-GPT

[3] BabyAGI GitHub Repository, https://github.com/yoheinakajima/babyagi

[4] “Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization”, Lex Fridman Podcast #368, https://www.youtube.com/watch?v=AaTRHFaaPG8
