
Giant AGI Leap: Why Rethinking the Approach Could Unlock its True Potential



We don't yet fully grasp how the brain creates intelligence, which makes it difficult to replicate with current AI methods. True understanding needs AI with some form of simulated embodiment. Combining the logic of symbolic AI with the adaptive nature of developmental learning models aligns more closely with how human intelligence actually develops.


Focus is key: homing in on the most relevant data points yields the best insights. Understanding sequence and context matters, so that information is analyzed in the correct weighted order. This means coupling knowledge graphs (for relationships), defined rules (for logic), and adaptability driven by expert feedback, as the sketch below illustrates.
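
A minimal sketch, in Python, of how those three pieces might be coupled. The triples in the graph, the single interaction rule, and the feedback-adjusted weight are illustrative assumptions rather than a prescribed design:

```python
# Knowledge graph: relationships stored as (subject, relation, object) triples.
# All entities, rules, and weights below are illustrative assumptions.
graph = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
}

def related(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return {o for (s, r, o) in graph if s == subject and r == relation}

# Defined rule (logic): reject a suggested drug if it interacts with
# anything the patient already takes.
def rule_no_interaction(drug, current_meds):
    return related(drug, "interacts_with").isdisjoint(current_meds)

# Adaptability: a per-rule confidence weight that expert feedback nudges.
rule_weights = {"no_interaction": 0.8}

def apply_expert_feedback(rule_name, agreed, step=0.1):
    """Raise a rule's weight when experts confirm it, lower it when they reject it."""
    delta = step if agreed else -step
    rule_weights[rule_name] = min(1.0, max(0.0, rule_weights[rule_name] + delta))

# Usage: the rule flags the interaction, and expert agreement reinforces the rule.
print(rule_no_interaction("aspirin", current_meds={"warfarin"}))  # False
apply_expert_feedback("no_interaction", agreed=True)
print(rule_weights["no_interaction"])  # ~0.9
```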


Hand-coded models, informed by human understanding and designed for abstract reasoning, present a crucial, and potentially superior, path toward creating AI capable of real-world intelligence.

The development of Artificial General Intelligence (AGI), a system with intellectual capacity comparable to a human's, has remained an elusive goal. While machine learning (ML) and deep learning (DL) have made impressive progress, there is compelling evidence that hand-coded models engineered with human insight might be crucial, or even the best way, to make AGI a reality. Here's why:


Challenges and Limitations of Current AI Approaches


  • Inefficiency and Poor Outcomes: High failure rates of AI projects, unreliable insights, and struggles with data quality and model accuracy highlight inherent shortcomings of the purely data-driven ML approach. (Gartner, KPMG, IDG, IBM)

  • Over-Reliance on Data: An overwhelming abundance of data and the energy demands associated with processing it are becoming unsustainable. This dependency on massive datasets for limited results could signal a fundamental problem with current AI development trajectories. (Gadi Singer, Yann LeCun, Geoffrey Hinton)

  • Expertise Bottleneck: The limited number of specialized AI experts creates project backlogs. Distributing the design process could improve deployment speed, reduce bias, and leverage a wider variety of human expertise. (LinkedIn, McKinsey, Google)


What's Needed


  1. Meta-Model: A framework for constructing child models and problem-solving structures, emphasizing structured knowledge representation and adaptability.

  2. Pathfinding: Searching for efficient solutions promotes goal-oriented behavior and problem-solving, mirroring human cognition (see the pathfinding sketch after this list).

  3. Human Input: Human-defined segments and input support a balance between data-driven learning and expert insight.

  4. Cognitive Architectures: Replicating the structure and processes of the human mind to provide a more comprehensive map of human cognition during problem-solving.

  5. Hybrid System Thinking: Combining symbolic AI systems (rule-based, logical) with connectionist approaches (neural networks) to get both deep learning and reasoning (see the hybrid sketch after this list).

  6. Bridging the Gap: Connecting data-driven insights with abstract knowledge representation.
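
To make item 2 concrete, here is a minimal sketch of goal-directed pathfinding using A* over a toy grid. The grid layout, unit step cost, and Manhattan-distance heuristic are assumptions chosen only for illustration:

```python
import heapq

# Toy map: '.' is walkable, '#' is a wall (an illustrative assumption).
GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def neighbors(pos):
    """Yield walkable cells adjacent to `pos`."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def a_star(start, goal):
    """Return a cheapest path from start to goal, or None if unreachable."""
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]  # (f = g + h, g, pos, path)
    best_cost = {}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if best_cost.get(pos, float("inf")) <= cost:
            continue
        best_cost[pos] = cost
        for nxt in neighbors(pos):
            heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

# Usage: the search threads the shortest route around the '#' walls.
print(a_star((0, 0), (3, 4)))
```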
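
And for item 5, a minimal sketch of hybrid system thinking: a tiny hand-rolled perceptron stands in for the connectionist part, and explicit, inspectable rules sit on top of its output. The weights, facts, and 0.5 threshold are purely illustrative assumptions:

```python
import math

def perceptron(features, weights, bias):
    """Connectionist part: weighted sum squashed into (0, 1)."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def symbolic_decision(confidence, facts):
    """Symbolic part: explicit rules reason over the learned signal."""
    if "blocklisted" in facts:
        return "reject"              # a hard rule overrides any learned score
    if confidence > 0.5 and "verified_source" in facts:
        return "accept"
    return "defer_to_expert"         # route uncertain cases to a human

# Usage: the learned score says "likely good", but the rules still demand
# a verified source before accepting.
score = perceptron([0.8, 0.3], weights=[1.2, -0.4], bias=0.1)
print(symbolic_decision(score, facts={"verified_source"}))  # accept
print(symbolic_decision(score, facts=set()))                # defer_to_expert
```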

In 'On the Future of AI', Hinton argues that significant progress will require revolutionary thinking rather than incremental improvements on existing techniques.
