Early in November 2024, Cambridge Angels held one of its externally sponsored dinners, this time on GenAI: “Land Grab or Bubble”. Held at Clare Hall in Cambridge, it brought together 30 prominent academics, investors and corporate leaders active in this space.
With over 2 hours of discussion (with particular thanks to Ronjon Nag for his brilliant chairing), we have since been asked what its conclusions were. It would be impossible to extract a single answer, but CA member Simon Blakey posted some of the highlights that resonated with him and agreed that we could reproduce them here:
• One guest argued that AGI is here now; nobody talks about the Turing test anymore because we have surpassed it. Another guest argued that this was not the case and that LLMs are still only producing answers based on statistical probabilities. AGI requires abstract thought, and LLMs cannot do this: they can’t reason, they have no working memory, and they also don’t know how to forget.
• [It was also pointed out that self-awareness/consciousness had been debated since the time of Socrates, so we were unlikely to reach a definitive conclusion on where AGI stood over dinner]
• One guest stated that LLMs are best suited to tasks where hallucination and a small amount of misinformation can be tolerated. In contrast, another guest argued that in materials science at least, hallucinations are not necessarily negative and could be used to help initiate new lines of research. An open question was also raised: why should we hold AI to a higher standard of correctness than humans?
• One speaker thought that within 20 years there would be a path to read from and write to the brain directly from computers.
• The question was raised of how AI is impacting our brains. For example, there is growing concern that children might be losing some capacity to develop and interpret micro-expressions, particularly because of increased screen time. This led to a brief discussion around evolution, though some guests argued that environmental evolutionary pressures have always been present on humans and that AI was merely the latest of these.
• Leading on from this, there was some discussion of the career changes being precipitated by the advent of AI, with one guest describing an interesting matrix in which routine, repetitive jobs, both manual and cognitive, would be the first to go, whilst those which were non-routine, creative and requiring a ‘human veneer’ (empathy, ethical judgement and personal connection) would be around for longer and use AI primarily as an augmentation tool.
• There was some discussion of which market verticals investors should be focussing their attention on, and whether long-term value lay with the increasingly commoditised LLMs or with the wrappers, training data or outputs. There was consensus that model outputs that could have IP protection, such as pharma molecules, would have real value.
• Finally, several open questions were raised: why are we wasting time and resources building AGI when our focus should be on climate and health? Can we use the tools we have now to solve these critical problems? Similarly, to what extent will we use AI to make decisions for us? Will it extend to politics? Will we allow AI to decide on the best person to be in charge?
[and no, we did not discuss disinformation or the outcome of the US election…]
It was a brilliant example of the convening power of the Cambridge Angels network, and we are now canvassing opinions on themes and sponsors for the next one.