BREAKING – China’s DeepSeek Won the “AI War” – Now What?

Watch or Listen

Episode Summary

China and the Open Source movement have kicked the AI market’s butt! Tim discusses the recent release of DeepSeek, an open-source AI model from China that has disrupted the market, wiping $1 trillion off the stock market.

He emphasises the growing dominance of open-source AI, the need for businesses to avoid long-term contracts with AI providers, and the importance of building ecosystems around AI models rather than committing to expensive proprietary solutions.

Tim also addresses the market’s reaction to the rise of cheaper AI alternatives and critiques the narrative that regulations stifle innovation, arguing instead that innovation thrives under constraints.

Join the conversation now because it’s getting HOT in the world of AI!


Why 2025 Is The Era of Internet of Sh*t (IoS)

Watch or Listen

Episode Summary

Tim El-Sheikh opens 2025 with reflections on the state of the world, his own habits, and the challenges ahead. After a much-needed holiday break, he notes that many entrepreneurs and business leaders are expressing a cautious optimism that, to him, feels eerily similar to the mood before the 2008 financial crash.

The uncertainty surrounding the economy and technology markets makes it difficult to feel confident, especially when so much remains unknown. Once full of utopian aspirations, conversations about artificial intelligence (AI) have turned into debates about regulation, ethical responsibility, and the fallout of political decisions.

Tim feels that politically driven change is materialising even faster than he predicted, and he can’t shake the sense that we’re headed into a stormy period.

He reflects on Meta’s recent announcement to remove moderation policies, which he believes is a step backwards, especially given the platform’s troubled history with harmful content. Cases of teen depression, self-harm, and suicides have been linked to social media platforms like Meta’s, and without safeguards, the situation could worsen.

Tim also draws a parallel between Meta’s shift and the broader trend of deregulation in the U.S., arguing that these decisions prioritise profit and ideology over the well-being of users, especially vulnerable ones. The idea that the CEOs who run social media platforms can be “free speech absolutists” ignores the global nature of these networks and their far-reaching consequences.

Amid this pessimism, Tim sees a glimmer of hope in Europe, which he believes has an opportunity to lead in responsible AI development. However, he tempers this optimism with realism, noting Europe’s history of mishandling its potential. Brexit, for example, crippled London’s position as a global AI hub, and without deeper collaboration between the UK and the EU, it’s hard to imagine a return to that status.

Still, Europe’s focus on regulation and responsibility stands in contrast to the U.S.’s increasingly deregulated tech landscape, and this difference could define the future of innovation.

Tim also reflects on the evolution of the internet itself. He fondly recalls the early days when it was a space for information sharing and genuine connection, lamenting how social media has transformed it into a space dominated by a few major platforms.

While social media has its benefits, the lack of effective moderation, especially on Meta’s platforms, threatens to turn these spaces into chaotic, harmful environments. The global nature of these platforms makes their impact even more troubling, as they often ignore the diverse needs and concerns of their international user base.

So much more to discuss. Join the conversation now.


How a Tech CEO and ex-Pro Athlete Overcame Loneliness and Isolation

Watch or Listen

Episode Summary

In this latest episode, Tim El-Sheikh gets a bit personal!

He delves into the pressing issue of loneliness and isolation, something that has become a rising global concern affecting people of all ages.

He was prompted by a recent report highlighting that 50% of Britons will face a lonely Christmas in 2024 (please take a look at the citations below).

Drawing from his own experiences as a tech entrepreneur, pro athlete, and teenager who faced dark times, Tim shares the pivotal moments in his life that led to feelings of loneliness, isolation and depression and the actionable steps he used to overcome these challenges.

He emphasises the importance of identifying the root causes of isolation and taking intentional steps to reconnect with the world. For Tim, sports played a crucial role in connecting with others and maintaining a positive mindset. But he also admits that it was not a simple process; it takes a form of “mental training” over time.

Tim also warns against over-relying on digital communication and AI chatbots and encourages face-to-face interactions to foster genuine connections.

This episode isn’t just about Tim’s story but a direct call to action for you to recognise and address your own experiences with loneliness.

Join the conversation, share your strategies, and explore the resources linked in the citations section below.

References and Citations


Valeria Sadovykh: Your Cognitive Decline from Digital Amnesia

Watch or Listen

Episode Summary

In this episode of the CEO Retort podcast, we welcomed Dr Valeria Sadovykh, a leading technology strategist at Microsoft, who shared her insights on the intersection of human behaviour, technology, and AI.

She discussed technology’s profound impact on cognitive functions and decision-making processes in modern society. She highlighted how integrating technology in daily life has led to what she terms “lazy decision-making.”

This phenomenon occurs when people stop critically processing information, relying instead on readily available online solutions. This reliance on technology, she argues, diminishes our cognitive abilities, affecting our capacity to focus, memorise, and communicate effectively.

Throughout our discussion, Tim and Valeria explored the dual role of technology – while it offers substantial benefits, such as enhanced efficiency and access to information, it also poses risks by fostering dependency and potentially eroding essential human capabilities.

Valeria emphasised the responsibility of individuals and societies to manage their technology use consciously to ensure it serves rather than dominates. She also touched on her personal and professional journey, explaining how her experiences and research have shaped her understanding of these issues.

The discussion delved into action plans for more critical engagement with technology, with both Tim and Valeria urging listeners to consider the long-term implications of their digital habits on their cognitive health and societal well-being.

Valeria called for a balanced approach to technology, advocating for the development of “soft skills” that technology cannot replicate, such as emotional intelligence and interpersonal communication.

Join the conversation now and tell us what you think and how your use of technology has impacted your cognitive functions.

Our Favourite Quote from This Episode

I wondered why people outsource their most important decisions about their lives and their health to the wisdom of crowds when this wisdom only gets it right 50% of the time or less!

About our Guest

Dr Valeria Sadovykh

After arriving in Auckland as a solitary teenager from Eastern Europe and spending 11 years studying part-time while working to pay her way, Dr Valeria Sadovykh is now shaping thinking on the use of AI at the world’s largest software company, Microsoft.

Valeria is passionate about solving societal challenges with innovative and responsible AI. As a scholar, speaker, and author, she has researched and educated on socially responsive AI, decision-making and decision intelligence, using AI for good and social innovation.


The Future of AI Under Trump and Project 2025

Watch or Listen

Episode Summary

Tim El-Sheikh gives an in-depth analysis of the future of AI under Trump’s presidency following the 2024 US elections.

He compares the US’s potential economic downfall under Trump to the UK’s experience under the Conservative Party and Brexit, cautioning that history might repeat itself.

Tim argues that AI is in a precarious situation under Trump’s administration, predicting a shift from AI that serves humanity to AI that serves shareholders and surveillance capitalism. He further criticises the notion that regulation stifles innovation, arguing that challenges and regulations actually spur creative solutions and better innovation.

Discussing the potential impact on the tech industry, he notes that right-wing governments traditionally do not invest in innovation and often weaponise government against tech.

Tim predicts a potential economic catastrophe if tech talent leaves the US, but also sees this as an opportunity for countries like the UK to revive their tech sector.

Despite the grim outlook, Tim encourages listeners to remain proud of their identity and unite to reclaim their country.

Do you agree? Join the conversation now.


Can DeepMind’s Nobel Prize Boost DeepTech Businesses?

Watch or Listen

Episode Summary

In the latest episode of the AI Geeks Podcast, Tim delves into an extraordinary moment for the artificial intelligence (AI) community: the 2024 Nobel Prize announcements. This year, the AI field celebrated four Nobel laureates, highlighting the profound impact of AI on global scientific advancements.

A significant highlight was Demis Hassabis of DeepMind, who clinched the Nobel Prize in Chemistry. Tim shares a personal connection with Google Campus, where he was part of the same first-generation cohort of London’s Silicon Roundabout as Hassabis and many other innovators. He reflects on the early days of AI’s integration into significant scientific research and its evolution to achieving such prestigious recognition.

The discussion also touches on the broader implications for deeptech startups. Tim shares his insights on the changing landscape of venture capital investment in technology. He reminisces about a time when deeptech companies, grounded in solid scientific research, were the primary beneficiaries of funding. However, he notes a shift towards more hype-driven investments in less groundbreaking projects.

Tim argues that true innovation requires time and substantial investment in deep research, which doesn’t align well with the fast returns expected by modern venture capital models. He suggests that governments should play a more proactive role in funding and supporting deeptech initiatives to foster real innovation that can tackle humanity’s most pressing challenges.

Moreover, Tim shares his concerns about the hype overshadowing substantial technological advancements, emphasising the need for a more ethical and responsible approach to AI development. He calls for a return to innovation that genuinely serves societal needs rather than just focusing on creating investor returns.

Tim encourages a broader discussion on the role of government in technological development and the need for a supportive ecosystem that allows deeptech startups to thrive and contribute meaningfully to society. Join the conversation now and tell us what you think.


Raoul Lumb: How AI Models Violate Copyright

Watch or Listen

Episode Summary

We welcomed Raoul Lumb, a leading UK media and technology lawyer specialising in areas crucial to the AI industry, such as intellectual property, copyright, and data protection.

Raoul shared insights from his extensive experience representing tech founders and companies navigating the complex landscape of technology law.

The Legal Landscape of AI: Challenges and Insights

Raoul highlighted the pressing need for global harmonisation in AI regulations, stressing that AI should not be underestimated in its impact and requires a thoughtful approach rather than a perfectionist, siloed strategy that might stifle innovation.

He pointed out the dangers of current trends in which big tech companies and AI startups push users to compromise on data privacy to train their AI models, often without transparent consent. The fact that data is available online does not mean it is yours to use!

Ethics and Legalities in AI Usage

Throughout the podcast, Raoul discussed the ethical implications and legal challenges that come with the development and deployment of AI technologies. He emphasised the importance of understanding the nuances of intellectual property rights in the AI context, especially as companies often leverage user data and copyrighted material under vague terms of service, potentially leading to exploitation or misuse.

Startup Challenges and Data Protection

For tech startups, Raoul underscored the pitfalls of failing to establish clear policies on data usage and intellectual property from the outset. He noted that many startups struggle with securing investment or navigating exits due to inadequate attention to how they handle data, particularly when training AI models.

The Future of AI Regulation

Looking ahead, Raoul expressed concerns about the potential for AI to automate jobs at scale, leading to significant socio-economic shifts and possibly lower standards in some industries. He called for thoughtful regulation that balances innovation with protections against misuse and considers the broader implications on employment and societal norms.

Join the Conversation

This discussion serves as a crucial reminder of the complexities and responsibilities inherent in advancing AI technology. As AI continues to permeate various sectors, the need for robust, clear, and fair legal frameworks grows increasingly urgent, ensuring that AI development aligns with ethical standards and respects user rights without stifling innovation. But are we actually seeing this today? Join the conversation now.

Our Favourite Quote from This Episode

The fact that someone’s data is available somewhere like LinkedIn does not mean it’s an all-you-can-eat buffet of data about people.

About our Guest

Raoul Lumb

Raoul is a Partner in the Technology, Corporate & Commercial teams. He is a specialist commercial technology lawyer who helps his clients with a wide range of contractual, data protection and intellectual property issues. His clients range from start-ups to listed multinational companies and include software developers, digital agencies, virtual reality producers, crowdfunding platforms and online gaming services.

Raoul has experience representing FinTech clients and negotiating software licensing deals with most of the world’s major banks, as well as representing clients in negotiations with European manufacturing companies and research institutions – including advising on the supply of robotics hardware to the European Organization for Nuclear Research (“CERN”).

Raoul regularly appears on the BBC as an expert commentator on technology issues.


The End of AI Utopia Thanks to Dumb Politics

Watch or Listen

Episode Summary

Tim El-Sheikh discusses the pressing issue of how AI technology is being used and who it’s serving. Tim is very concerned about the current trajectory of AI development, suggesting we are moving towards an AI dystopia rather than the promised utopia some tech CEOs and VCs often talk about.

Tim highlights several disturbing news stories and behaviours in the tech industry. He argues that AI has the potential to solve numerous problems in various sectors, from healthcare to government to the environment, but political interference often hinders these possibilities. He criticises the politicisation of AI and the lack of regulation, suggesting that powerful entities who are anti-regulation are gearing up to misuse AI for their benefit.

He delves into the political shift in Silicon Valley towards supporting Trump and the potential repercussions of their inevitable support of Project 2025 – a policy wishlist for the next Republican president which proposes to expand presidential powers, impose an ultra-conservative social vision on the US, and drastically overhaul several federal agencies. Tim argues that supporting such a project amounts to supporting a dystopian future for AI.

He also calls out tech entrepreneurs and VCs who advocate for “open content” on the web, arguing that it’s a misleading term used to justify copyright infringement.

Tim also discusses the idea of a unified global framework for regulating AI, similar to regulations around nuclear energy, to avoid potential catastrophes. Tim expresses scepticism about the over-hyped AI utopia due to these issues and reminds viewers and listeners about the dangers of misinformation, AI-generated revenge porn, and extortion.

Is there a solution to this grim reality?

Watch and listen to the discussion and join the conversation.


Meta Destroys Closed AI Models with Llama 3.1

Watch or Listen

Episode Summary

Tim delves into the implications of Meta’s release of their open-source large language model, Llama 3.1. He suggests that this development could potentially signal the beginning of the end for commercial large language models.

Tim starts by acknowledging his past criticisms of Meta (formerly Facebook) and its CEO, Mark Zuckerberg, particularly around privacy issues. However, he commends Meta for this latest move, stating that it’s a significant step in the right direction. He believes that open-source models like Llama 3.1 may outperform commercial models such as OpenAI’s GPT-4 and Anthropic’s Claude, offering a more cost-effective solution for developers.

Tim highlights that Meta’s move towards open-source AI is beneficial for several reasons. It allows organisations to train, fine-tune, and distil their own models, giving them control over their AI destiny without being locked into closed vendor systems. It also helps protect data and offers a model that is efficient and affordable to run.

However, Tim also points out a significant concern: the lack of transparency about the data used to train Llama 3.1. He argues that without this information, it’s challenging to fully consider Llama 3.1 as open-source. He also mentions the potential copyright issues that could arise from the use of certain data sets.

Despite these concerns, Tim believes that the release of Llama 3.1 could be a game-changer for the AI industry. He predicts that businesses might start demanding open-source language models, which could lead to a shift in the market. He suggests that Meta could potentially dominate the business market with generative AI, while Apple could dominate the consumer market.

Join the conversation to explore the rise and opportunities of Open Source AI models.
