Matthew Ikle, Chief Science Officer at SingularityNet – Interview Series

Matthew Ikle is the Chief Science Officer at SingularityNET, a company founded with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence: an AGI that is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. The core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Given your extensive experience and role at SingularityNET, how confident are you that we will achieve AGI by 2029 or sooner, as predicted by Dr. Ben Goertzel?

I am going to answer this question in a bit of a roundabout way. 2029 is roughly five years from now. Many years ago (the early-to-mid 2010s), I was extremely optimistic about AGI progress. My optimism was founded on the level of detailed thought and the convergence of ideas I witnessed in AGI research at the time. While most of the big ideas from that era, I believe, still hold promise, the difficulty, as is often the case, comes from fleshing out the details of such broad-stroke visions.

With that caveat in mind, there is now a plethora of new information from numerous disciplines – neuroscience, mathematics, computer science, psychology, sociology, you name it – that provides not just the mechanisms for filling in those details but also conceptual support for the foundations of that earlier work. I am seeing patterns, in quite divergent fields, that all seem to me to be converging at an accelerating rate toward analogous sorts of behaviors. In many ways, this convergence reminds me of the period just prior to the release of the first iPhone. To paraphrase Greg Meredith, who is working on our RhoLang infrastructure for safe concurrent processing, the patterns I see these days are related to origin stories: how did the first life/cell begin on Earth? How and when did mind form? And related questions regarding phase transitions, for example.

For example, there is quite a bit of new experimental research that tends to support the ideas underlying a complex dynamical systems viewpoint. EEG patterns of human subjects, for example, display remarkable behavior in alignment with such system dynamics. These results hark back to some much earlier work in consciousness theories; now there appear to be the beginnings of experimental backing for those theoretical ideas.

At SingularityNET, I am thinking a lot about the self-similar structures that generate such dynamics. This is quite different, I would argue, from what is happening in much of the DNN/GPT community, though there is certainly recognition of those ideas among some more foundationally minded researchers. I would point, for example, to the paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” released by 19 researchers in August of 2023. The researchers spanned a variety of disciplines, including consciousness studies, AI safety research, brain science, mathematics, computer science, psychology, neuroscience and neuroimaging, and mind and cognition research. What those researchers have in common goes beyond a simple quest for the next incremental architectural improvement in DNNs: they are focused on scientifically understanding the big philosophical ideas underpinning human cognition and on how to bring those ideas to bear to implement real AGI systems.

What do you see as the biggest technological or philosophical hurdles to achieving AGI within this decade?

Understanding and answering big philosophical and scientific questions including:

  • What is life? We may think the answer is clear, but biological definitions have proven problematic. Are viruses “alive,” for example?
  • What is mind?
  • What is intelligence?
  • How did life emerge from a few base chemicals in specific environmental conditions? How could we replicate this?
  • How did the first “mind” emerge? What ingredients and conditions enabled this?
  • How do we implement what we learn when investigating the above five questions?
  • Is our current technology up to the task of implementing our solutions? If not, what do we need to invent and develop?
  • How much time and personnel do we need to implement our solutions?

SingularityNET views neuro-symbolic AI as a promising solution to overcome the current limitations of generative AI. Could you explain what neuro-symbolic AI is and how SingularityNET plans to leverage this approach to accelerate the development of AGI?

Historically, there have been two main camps of AGI researchers, along with a third camp blending the ideas of the other two. There have been researchers who believe solely in a sub-symbolic approach. These days, this primarily means using deep neural networks (DNNs) such as Transformer models, including the current crop of large language models (LLMs). Because they use artificial neural networks, sub-symbolic approaches are also called neural methods. In sub-symbolic systems, processing runs across identical and unlabeled nodes (neurons) and links (synapses). Symbolic proponents use higher-order logic and symbolic reasoning, in which nodes and links are labeled with conceptual and semantic meaning. SingularityNET follows a third approach, most accurately described as a neuro-symbolic hybrid, leveraging the strengths of both symbolic and sub-symbolic methods.
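To make that distinction concrete, here is a minimal, hypothetical Python sketch (an illustration only, not SingularityNET’s actual architecture or code): a tiny sub-symbolic scorer with unlabeled weights grounds an assertion in a small symbolic knowledge base, where labeled links support explicit, auditable inference. The function names, toy triples, and threshold are illustrative assumptions.

```python
# Toy neuro-symbolic loop (illustrative only): a sub-symbolic scorer produces a
# confidence from unlabeled weights, and a symbolic layer attaches labels and
# applies an explicit, human-readable inference rule.
import math
import random

random.seed(0)

# --- Sub-symbolic part: a tiny one-layer "network" with unlabeled weights ---
weights = [random.uniform(-1, 1) for _ in range(4)]

def neural_score(features):
    """Map raw feature values to a confidence in (0, 1); the weights carry no
    semantic labels, only numbers."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# --- Symbolic part: labeled nodes and links with explicit semantics ---
knowledge = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def infer_is_a(entity, target):
    """Transitive 'is_a' reasoning over labeled links: auditable step by step."""
    frontier = {entity}
    while True:
        nxt = {o for (s, r, o) in knowledge if s in frontier and r == "is_a"}
        if target in nxt:
            return True
        if nxt <= frontier:          # nothing new can be derived
            return False
        frontier |= nxt

# --- Hybrid glue: neural perception grounds a symbolic assertion ---
features = [0.9, 0.2, 0.4, 0.7]      # e.g. pixel statistics from an image
confidence = neural_score(features)
if confidence > 0.5:                  # perception says "this looks like a cat"
    knowledge.add(("image_42", "is_a", "cat"))

print(f"neural confidence: {confidence:.2f}")
print("image_42 is an animal:", infer_is_a("image_42", "animal"))
```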

Yet it is a specific sort of hybrid, largely based on Ben Goertzel’s patternist philosophy of mind and detailed in, among many other documents, his paper “The General Theory of General Intelligence: A Pragmatic Patternist Perspective.”

While much of current DNN and LLM research is based upon simplistic neural models and algorithms, mammoth datasets (e.g., the entire internet), and the tuning of billions of parameters in the hope of achieving AGI, SingularityNET’s PRIMUS strategy is based upon foundational understandings of dynamic processes at multiple spatio-temporal scales and of how best to align such processes so that desired properties emerge at different scales. Such understandings enable us to guide AGI research and development in a human-understandable manner.

What frameworks do you believe are critical to ensure that AGI development benefits all of humanity? How can decentralized AI platforms like SingularityNET promote a more equitable and transparent process compared to centralized AI models?

All kinds of ideas here:

Transparency – While nothing is perfect, ensuring complete transparency of the decision-making process can help everyone involved (researchers, developers, users, and non-users alike) align, guide, understand, and better handle AGI development for the benefit of humanity. This is similar to the problem of bias, which I will touch on below.

Decentralization – While decentralization can be messy, it can help ensure that power is shared more broadly. It is not, in itself, a panacea, but a tool that, if used correctly, can help create more equitable processes and results.

Consensus-based decision-making – Decentralization and consensus-based decision-making can work together in the pursuit of more equitable processes and results. Again, they don’t always guarantee equity, and there are complexities that need to be addressed here in terms of reputation and areas of expertise. For example, how can we best balance conflicting desired characteristics? (A toy sketch of one reputation-weighted scheme appears after this list.) I view transparency, decentralization, and consensus-based decision-making as just three critically important tools that can be used to guide AGI development for the benefit of humanity.

Spatiotemporal alignment – Aligning emergent phenomena across multiple scales, from the extraordinarily small to the inordinately large. In developing AGI, I believe it is important not to rely on a single “black-box” approach in which one hopes to get everything correct at the outset. Instead, I believe designing AGI with fundamental understandings at various development stages and at multiple scales can not only make it more likely that we achieve AGI but, more importantly, guide such development in alignment with human values.
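As a toy illustration of how reputation and areas of expertise might be folded into consensus-based decision-making, the sketch below weights each vote by a per-domain reputation score and requires both a quorum and a supermajority. The scheme, names, and thresholds are hypothetical assumptions, not SingularityNET’s actual governance mechanism.

```python
# Toy reputation-weighted consensus (illustrative assumption only): each vote is
# weighted by the voter's reputation in the relevant domain; a proposal passes
# only if enough expertise weighs in (quorum) and a supermajority of that
# weight supports it.
def weighted_consensus(votes, reputation, domain, quorum=0.5, threshold=0.66):
    """votes: {participant: True/False}; reputation: {participant: {domain: weight}}."""
    total_weight = sum(rep.get(domain, 0.0) for rep in reputation.values())
    cast = sum(reputation[p].get(domain, 0.0) for p in votes)
    support = sum(reputation[p].get(domain, 0.0) for p, v in votes.items() if v)
    if total_weight == 0 or cast / total_weight < quorum:
        return False                       # not enough domain expertise voted
    return support / cast >= threshold     # supermajority of the weight that voted

reputation = {
    "alice": {"bio-ai": 0.9, "governance": 0.2},
    "bob":   {"bio-ai": 0.4, "governance": 0.8},
    "carol": {"bio-ai": 0.7, "governance": 0.5},
}
votes = {"alice": True, "carol": True, "bob": False}
print(weighted_consensus(votes, reputation, domain="bio-ai"))  # True: 1.6 / 2.0 = 0.80
```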

SingularityNET is a decentralized AI platform. How do you envision the intersection of blockchain technology and AGI evolving, particularly regarding security, governance, and decentralized control?

Blockchain certainly has a role to play in AI control, security, and governance. One of blockchain’s biggest strengths is its ability to foster transparency. The question of bias is a great example of this. I would argue that every person and every dataset is biased. I have my own personal biases, for example, when it comes to what I believe is required to achieve truly safe, beneficial, and benevolent AGI. These biases were forged by my studies and background and they guide my own work.

At the same time, I try to be completely open to ideas that conflict with my biases and am willing to adjust those biases based upon new evidence. Regardless, I try my best to be open and transparent about my biases, and to then condition my ideas and decisions on a self-reflective understanding of them. It is tricky and difficult, but, I believe, better than not acknowledging one’s own biases. By its nature, blockchain allows for better and more transparent tracking, tracing, and verification of processes and events. In a similar manner as I described previously, transparency is a necessary, but not always sufficient, component of security, governance, and decentralized control.

How blockchain and AGI co-evolve is an interesting question. For the two technologies to interact toward a positive singularity, it seems clear that the fundamental characteristics I keep pointing to (transparency, decentralization, consensus, and values alignment) are central and critical and must be kept in mind at all stages of their co-evolution.

As a leader who has been closely involved in both AI and blockchain, what do you believe are the most important factors for fostering collaboration between these two fields, and how can that drive innovation in AGI?

I come from the AI/AGI side of that pair. As is often the case when integrating cross-disciplinary ideas, much comes down to matters of language and communication. All groups need to listen to each other in order to better understand how the technologies can help one another. In my job at SingularityNET, this has been a constant struggle. High-end researchers, which SingularityNET has in abundance (to put it mildly), often have clear mental conceptions of big ideas. When working across disciplinary boundaries, the difficult part is realizing that not everyone is “in your head.” What one takes for granted will not be so clearly observed by those in other fields. Even words used in common can mean different things across different fields of study. There was a recent case in our BioAI work in which biologists were using a mathematical term, but not entirely correctly in terms of its mathematical definition. Once those sorts of situations are clearly understood, the team can move forward with common purpose, so that the integration truly makes the whole greater than the sum of its parts.

How do you see the AI and blockchain industries working towards greater diversity and inclusion, and what role does SingularityNET play in promoting these values?

AI and blockchain can both play major roles in improving diversity and inclusion efforts. Although I believe it is impossible to remove all bias – many biases form simply through life experiences – one can be open and transparent about one’s biases. This is something I actively strive to do in my own work, which is shaped by my academic background, so that I tend to see problems through a lens of complex system dynamics. Yet I still strive to be open to, and to understand, ideas and analogies from other perspectives. AI can be harnessed to aid in this self-reflection process, and blockchain can certainly aid with transparency. SingularityNET can play a huge role by hosting tools for detecting, measuring, and removing, as much as possible, biases in datasets.
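As a hypothetical example of the kind of bias-measurement tool such a platform could host, the sketch below computes a simple demographic-parity gap over a toy dataset. The metric choice, field names, and data are illustrative assumptions, not an actual SingularityNET service.

```python
# Hypothetical sketch of a dataset bias check: measures how unevenly a positive
# outcome is distributed across groups (demographic-parity gap). Metric and toy
# data are illustrative assumptions only.
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Return (largest gap in positive-outcome rates between groups, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += 1 if row[outcome_key] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy loan-approval data: the gap quantifies the disparity between groups.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
gap, rates = demographic_parity_gap(data, "group", "approved")
print(f"approval rates by group: {rates}")
print(f"demographic parity gap: {gap:.2f}")   # 0.67 - 0.33 = 0.33
```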

How does SingularityNET’s work in decentralized AI ecosystems contribute to solving global challenges such as sustainability, education, and job creation, especially in regions like Africa, where you have a special interest?

Sustainability:

  • Applying AI and system models to solve complex ecosystem problems at massive scale.
  • Monitoring such solutions at scale.
  • Using blockchain to track, trace, and verify such solutions.
  • Using a combination of AI, ecosystem models, hyper-local data, and blockchain, we have ideated complete solutions for artisanal mining in Africa and for agricultural carbon sequestration at scale.

Education:

As a former tenured full professor of mathematics and computer science, I consider education extremely important, especially as it provides opportunities to underserved student populations. It is important to:

  • Enhance accessibility by developing hybrid courses to reach students who may face geographical, financial, or time constraints.
  • Promote diversity and inclusion by increasing the participation of underserved populations in AI, blockchain, and other advanced technologies.
  • Foster interdisciplinary knowledge by creating courses that bridge academic and professional fields.
  • Support career advancement by providing skills and certifications that are directly applicable to the job market.

I view both AGI and blockchain, and their synergies, as playing critical roles in addressing the above objectives within “apprenticeship to mastery”-style programs centered upon hands-on, project-based learning.

Job Creation:

By fostering the four educational objectives above, it seems to me that AGI, blockchain, and other advanced technologies, coupled with positive collaborations among teachers and learners, could encourage and spawn entirely new technologies and businesses.

As someone committed to achieving a positive singularity, what specific milestones or breakthroughs in AI technology do you believe will be necessary to ensure that AGI develops in a beneficial way for society?

  • Ability to align emergent phenomena in human interpretable manners across multiple spatiotemporal scales.
  • Ability to understand at a deeper level the concepts underlying “spontaneous” phase transitions.
  • Ability to overcome multiple hard problems at a fine level of detail to enable true multi-processing through state superpositions.
  • Transparency at all stages.
  • Decentralized decision-making based upon consensus building.

Thank you for the great interview; readers who wish to learn more should visit SingularityNET.
