#39 - The seven hidden beliefs driving Big Tech (and why you should care)
93% of senior product leaders have never heard of TESCREAL. It's shaping decisions anyway.
When it comes to thinking of beliefs as systems, there's a paradox: you do not have to believe in a particular ideology in order to be influenced by it. More than that, you do not even have to be aware that it exists, let alone choose to accept or reject it. At every level, from communities to organisations to society at large, we are inescapably governed by ideas we never consented to and often cannot even name.
Many of us are familiar with the film The Matrix, which draws on Plato's allegory of the cave. The film exposes the illusion of a simulated reality to Neo, who, upon 'waking up', discovers a dystopian future in which machines harvest bio-energy from humans to power themselves. Inside the matrix, Neo had no awareness that anything about his reality was wrong or fabricated. He could not see the machines creating his simulation and influencing his thoughts, feelings and behaviour.
Some ideologies leave identifiable traces in our systems and institutions. Living in the UK, we have at least a notional sense of how Christianity as a belief system has influenced aspects of society: from the monarch as both head of state and head of the Church of England, to shops and businesses being shut on Sundays, regarded as a holy day of rest, to the Ten Commandments informing our moral infrastructure (the assumptions we hold about work, rest, guilt, individual responsibility, and what counts as 'deserving') upon which the law sits.
With this in mind, it came as little surprise that a poll I ran amongst senior product leaders this week revealed that the vast majority had never heard of the ideology that many influential Silicon Valley elites hold dear: TESCREAL. Out of 43 respondents, 2 had heard of it and knew what it was, 1 had heard of it but didn't know what it was, and the remaining 40 (around 93%) had never heard of it.
TESCREAL is shaping product roadmaps, funding priorities, and what problems get deemed 'worth solving' in tech. And most of the people building and leading these companies have never heard its name.
That’s a problem because TESCREAL consistently prioritises hypothetical futures over present realities. It concentrates power in the hands of a self-selected elite who believe they alone can see clearly. And it provides intellectual cover for present-day harms — fraud, surveillance, discrimination, exploitative working practices, environmental degradation — by reframing them as necessary steps toward a better future.
So, let’s take a closer look at what this ideology actually is.
What is TESCREAL?
TESCREAL is a neologism coined by AI researcher Timnit Gebru and philosopher Émile P. Torres. Gebru gained prominence after being pushed out of Google in 2020 for co-authoring a paper on the risks of large language models — research that proved prescient as concerns about AI bias, environmental costs, and corporate power concentration have intensified.
Undeterred, Gebru founded the Distributed Artificial Intelligence Research Institute (DAIR), with the goal of furthering her research into AI's impact on marginalised communities. She and Torres defined TESCREAL as standing for seven overlapping futurist philosophies (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism) that tech elites use to justify and accelerate the pursuit of Artificial General Intelligence (AGI).
Let's look at each of the seven components in turn, with real examples of how they play out in tech.
Transhumanism
The belief that humans should use technology to transcend biological limitations: enhancing intelligence, extending lifespan indefinitely, and potentially uploading consciousness to digital substrates.
In practice: Neuralink's brain-computer interfaces, aimed at merging humans with AI; the startup Ambrosia, which offered older people blood transfusions from young donors in order to prolong their lives (a 'young blood' treatment Peter Thiel was widely reported to be interested in); and the Silicon Valley obsession with biohacking and nootropics to optimise cognitive performance (hello Bryan Johnson).
Extropianism
A radical offshoot of transhumanism that rejects all limits on technological development, advocating for perpetual progress, minimal regulation, and the reversal of entropy itself through innovation.
In practice: The ‘move fast and break things’ ethos that shaped early Facebook; most of Big Tech lobbying against the EU’s AI Act; Binance’s refusal to comply with basic regulatory requirements set down by the UK’s Financial Conduct Authority.
Singularitarianism
The conviction that we're approaching a technological singularity, where AI becomes superintelligent and triggers runaway growth beyond human comprehension or control.
In practice: OpenAI’s stated mission to ensure AGI ‘benefits all of humanity’, which treats AGI as inevitable rather than a choice; the framing of AI development as a race where we must build it safely before someone else builds it unsafely — this quote from Marc Andreessen is a prime example:
"We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder."
Cosmism
The belief that humanity’s ultimate purpose is spreading consciousness throughout the universe, colonising space, and potentially resurrecting all past humans through future technology.
In practice: Elon Musk’s insistence that becoming multi-planetary is an existential imperative, which drives SpaceX and Mars colonisation efforts — framed as moral duty rather than exploration or curiosity.
Rationalism
A belief, emerging from the LessWrong community, that human reasoning is deeply flawed and can be debugged through systematic protocols, positioning tech elites as uniquely capable of clear thinking.
In practice: Tech investors like Peter Thiel funding rationalist organisations and workshops; the Center for Applied Rationality running 'debiasing' training for tech leaders; the proliferation of decision-making frameworks that claim to debug human bias (and don't we love a framework in Product?).
Effective Altruism
The movement claiming to identify the most cost-effective ways to do good using quantitative analysis rather than implicit biases (like empathy for others' pain), often concluding that preventing hypothetical future catastrophes outweighs alleviating present suffering.
In practice: Major EA funders like Open Philanthropy redirecting resources from global health to AI safety; Sam Bankman-Fried's justification of fraud through 'expected value' calculations, claiming the ends (hypothetical future good) justified the means (present-day financial crimes).
Longtermism
The philosophical position that what matters most morally is the far future (potentially trillions of beings across billions of years), making present concerns negligible by comparison; the sketch below shows the arithmetic.
In practice: The UK government allocated £8.5m to existential risk research while documented racial bias in police facial recognition goes unaddressed; Sam Bankman-Fried's FTX Future Fund committed $160m to longtermist causes, even as he stated he was "never interested in helping the global poor."
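To see how this outweighing actually works, here is a minimal sketch of the expected-value arithmetic that runs through both Effective Altruism and Longtermism. Every number in it is a hypothetical assumption of mine for illustration, not a figure from any real analysis:

```python
# A toy model of longtermist expected-value reasoning.
# Every number below is a hypothetical assumption for illustration,
# not a figure from any actual EA or longtermist analysis.

future_lives = 10 ** 16       # assumed future people if humanity spreads through space
risk_reduction = 10 ** -9     # assumed tiny cut in extinction risk from one intervention

# "Expected" future lives saved: probability change times population at stake.
expected_future_lives = future_lives * risk_reduction

# What the same resources might verifiably achieve today (again, assumed).
present_lives_saved = 1_000

print(f"Expected future lives saved: {expected_future_lives:,.0f}")  # 10,000,000
print(f"Present lives saved:         {present_lives_saved:,}")       # 1,000
# The speculative bet 'wins' by four orders of magnitude, which is how
# present-day suffering gets reframed as negligible.
```

Notice that the conclusion is driven entirely by the assumed inputs: make the hypothetical future population large enough, and any present-day cost can be outweighed on paper.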
I know kung fu
Hopefully by now you are convinced that TESCREAL not only exists, but that its influence extends far beyond a fringe group in the Valley to hold sway worldwide. And there are many more examples: seasteading (extropianism), the Metaverse (transhumanism/cosmism), 10x engineer mythology (rationalism), 996 (extropianism). You get the picture. Now that you have this knowledge, like Neo in the Matrix, what can you do with it?
First, let's take a moment to acknowledge how uncomfortable this might make you feel. Learning that TESCREAL's harmful beliefs are hiding in plain sight, likely influencing what you think and do as you work in tech, can make you feel like you need to (in the words of my friend) go hug a tree. But it is possible to push back.
One way to stop this ideology's ripple effects from leaking out of Silicon Valley into the whole industry is to focus on what's happening now. In Buddhist philosophy, the present moment is the centre of everything; it is the only aspect of our experience that is real. So what problems exist now that we should do something about? What kind of future do we want to actively construct in the now, rather than passively accept the version ushered in by those who don't have our best interests at heart?
This is an important reframe, and one that leads us to hope. Hope is the belief that the future can be better than the present AND that you have some agency to make it so. It requires both desire (for a better state) and perceived capability (belief you can influence outcomes). I particularly like Rebecca Solnit’s definition from Hope in the Dark:
“Hope is not a lottery ticket you can sit on the sofa and clutch, feeling lucky. It is an axe you break down doors with in an emergency… To hope is to give yourself to the future — and that commitment to the future is what makes the present inhabitable.”
Hope requires action in the present, not passive optimism about calculated futures.
Now you know about TESCREAL, you can’t unsee it. And that means every decision you make from here — every roadmap you prioritise, every problem you choose to solve, every future you work toward — can be made consciously, in the present moment, for the people who exist right now.
Now that you know kung fu, how are you going to use it?
If you're ready to examine the invisible beliefs shaping your decisions, I work with women leaders in tech and space through executive coaching and speaking engagements.