What's Happening with AI?
How years of unscientific AI hype created a financial time bomb, and how it could affect banks as well as the future of tech. Not generated by AI; the citations are my own.
TLDR: What is really going on?
Evidence suggests the current wave of AI hype (out of control for over two years now) may have been deliberately amplified either to crash banks or to trigger large-scale cryptocurrency adoption for injection into the global economy - on top of producing the AI slop that now permeates our digital experience.
Wait what?
AI will always be around; it will just be hated more in the near term. To be clear, the ideal applications for AI are probabilistic: it can excel when used at a high level and with critical thinking (similar to how Tesla drivers watch image recognition pop in and out on their touchscreens), but not for low-level deterministic tasks, where errors will always exist.
The US tech industry has accomplished great things in my lifetime. Over the last 10 years I've known AI as the new name for what was once "machine learning," and it has been personally disappointing to watch it implode. I've spent a significant portion of my therapy sessions finding productive ways to deal with frustration and anger toward armchair experts in positions of power (and navigating an increasingly audacious clown show of naive new San Francisco residents who followed the hype).
The facts I’ve learned over the last 2 years:
An illusion of growth. For two years I've queried hard data across multiple horizons of AI development activity. As each cohort churns through attempted AI implementations (i.e. trials of the technology), these horizons have given way to new global participants in the hype, who form another top before they also churn off.
Bot accounts have flooded social media with a non-scientific AI hype narrative (bots that also apparently drove the Cracker Barrel rebranding fiasco). Initially written off as a conspiracy theory, this now appears to be real and to make up a large portion of the internet.
Venture capitalists (and their banks) have been using an inaccurate calculation of annualized recurring revenue (ARR): they commonly extrapolate a single month of revenue to a full year, an inflated baseline that implies subscribers never churn.
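To make the distortion concrete, here is a minimal sketch of the gap between "one month times twelve" ARR and churn-adjusted revenue. All figures (the launch-month revenue and the churn rate) are invented for illustration; they are not from any real company.

```python
# Hypothetical startup: one strong launch month, then steady subscriber churn.
# All numbers are invented for illustration.

launch_month_revenue = 500_000          # dollars in the first (best) month
naive_arr = launch_month_revenue * 12   # the common shortcut: 1 month x 12

monthly_churn = 0.15                    # assume 15% of revenue churns each month
churn_adjusted = sum(
    launch_month_revenue * (1 - monthly_churn) ** m for m in range(12)
)

print(f"Claimed ARR:            ${naive_arr:,.0f}")       # $6,000,000
print(f"Churn-adjusted revenue: ${churn_adjusted:,.0f}")  # roughly $2.86M
```

Under these assumed numbers, the headline ARR overstates likely first-year revenue by more than 2x - and the gap widens with higher churn, which is exactly the variable the shortcut ignores.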
Depreciating AI infrastructure (as detailed in this blog post from Princeton) purchased with, or sometimes used as collateral for, loans issued against these inaccurate valuations (see above).
Organized spread of AI misinformation by crypto investors. Sam Altman at OpenAI and Dario Amodei at Anthropic have supported fear-based AI narratives warning of "catastrophe" if we "cannot control it" (even though it is probabilistic). Both are affiliated with Effective Altruism, an organization funded by venture capitalists associated with crypto (another major backer was Sam Bankman-Fried, now in prison for large-scale cryptocurrency fraud at FTX). These investors have not yet recouped the investments they made in 2020 and 2021, and they continue to take out loans to fund their vision of a global currency shift.
The Japanese Yen's historically low value has allowed the carry trade to inflate the P/E ratios of technology stocks, which will likely reverse in due time.
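The mechanics of why a strengthening yen can force that reversal are worth spelling out. Below is a toy sketch of a yen carry trade: borrow cheap yen, convert to dollars, buy US tech stocks. Every number here (loan size, exchange rates, stock return) is invented; interest costs are ignored for simplicity.

```python
# Toy carry-trade illustration. All rates and moves are invented,
# and borrowing costs are ignored to keep the arithmetic simple.

borrowed_jpy = 1_000_000_000              # borrow 1 billion yen cheaply
entry_rate = 150.0                        # JPY per USD when the trade opens
usd_invested = borrowed_jpy / entry_rate  # ~ $6.67M put into US tech stocks

stock_return = 0.10   # suppose the stocks gain 10%
exit_rate = 135.0     # but the yen strengthens 10% (fewer JPY per USD)

usd_value = usd_invested * (1 + stock_return)
jpy_value = usd_value * exit_rate         # convert back to repay the yen loan
profit_jpy = jpy_value - borrowed_jpy

print(f"Profit in JPY: {profit_jpy:,.0f}")  # negative: a loss despite the gain
```

The point of the sketch: even a 10% stock gain is wiped out when the funding currency appreciates 10%, which is why a yen rebound can trigger forced selling (the "reverse carry trade") regardless of how the underlying stocks perform.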
Ponzi-style financing of successive funding rounds. Each funding round that raises an AI startup's valuation lately appears to be aggressively sold through chains of opaque financial products called special purpose vehicles (SPVs), which create a facade of legitimate investment activity, obscure the flow of funds, and hide liabilities from investors and regulators.
Circular financing among both public and private companies. Recent deals between AI companies, implying that reported revenue has been a facade, have spread widely on social media in the past month.
These loans could default within the private credit market, since the absurd valuations of AI startups do not generate the returns needed to repay them (some loans to other industries also appear to be AI-driven, though that may be a tangential issue).
What might at first seem trivial: the newly appointed FDIC chair supports decentralized technologies and may well plan to mitigate any resulting risk through cryptocurrency, to an unknown scope in the long term.
Finally, the nail in the coffin that dissuades any success from an AI pivot: I've observed that most employees at AI startups in SF lack basic critical thinking skills. According to unnamed recruiters, many are hired for what venture capitalists believe are indicators of being "AI native," as if that were a real trait that would benefit performance. The average age of these employees appears to be below 25, and most appear to be referred by venture capitalists after legacy admissions to Ivy League schools rather than through any meritocracy, or are otherwise [new] immigrants from India hired as cheap investments (I won't get into immigration issues). While this might sound like a negative view, I've witnessed enough alarming events to support my (subjective) hypothesis that most AI startup employees in 2025 may actually need their parents' coddling to make informed choices after any pivot, and this does not bode well for the tech industry (or for the loans given to AI companies that attempt a pivot after the hype fades).
As a result, we are left with 2 things:
An initially great technology that enables a very cool semantic (and probabilistic) search of training data scraped from the internet (providing a "reversion to the mean" response to the complexity of your initial prompt, much of it sycophantic), now marred by hype and misinformation. Its misunderstood anthropomorphic output has led some people into AI psychoses, and it causes further problems in developing discernment that lead to child safety issues (an area I've focused on scholastically). It also contributes to AI slop and spam.
Likely failing banks that could be forced to adopt cryptocurrencies at an unknown scale and on an unknown timeline.
Many unknowns remain:
We don't know how any cryptocurrency adoption arising from this situation will end. Will it ultimately fail like the pump-and-dump schemes it has become known for, and if so, how large will that scheme grow? Or will crypto be adopted as a result - and would that adoption be entirely bad, or is there some good in it?
We also don't know whether the astronomical stated costs of datacenter infrastructure (which is probably not needed) are ultimately a veiled excuse to justify spending on energy infrastructure that could be repurposed.
We also don't know whether honesty about AI limitations will prevail, though precedent tells us it won't. If it doesn't, my (subjective) hypothesis is that it will be entertaining to hear the continued excuses from tech executives (e.g. "if only we had enough power to handle the intense demand") for building infrastructure for unknown reasons - whether global warfare or energy efficiency to compete globally, time will tell. As of now, the high level of opacity suggests things are not heading in a good direction.
We also don't know how any popping of this valuation bubble will be framed, if not transparently. It could be veiled in the "tariff negotiation" drama that has become common over the last few months, but regardless, it will almost surely be accompanied by the unwinding of the Japanese Yen carry trade that appears to have financed loans in the tech industry.
Looking back, how did we get here?
Rough timeline below; feel free to add more in the comments, since I believe we'll be looking back on this for a while. I'm not getting into politics or referencing the 2024 US election, to avoid animosity.
2023: Most businesses that adapted quickly learned of AI's limitations and their spend decreased, while others were only beginning to spend (my professional world of rigorous ROI measurement frequently met resistance this year from CIOs who believed the social media bot chatter about limitless ROI).
2023 - 2024: US government spend on AI finally increased in October, after delays from big-government processes. My earlier blog posts on AI usage peaking did not see this coming, nor did they track government usage of AI.
2024: Slower-moving businesses started adopting AI, only to (also) learn of its limitations. One example is AI bots on homepages (e.g. "ask AI"), apparently driven by aging corporate boards more worried about "falling behind" than about thinking critically.
2025: With the current US government appearing to support this narrative, non-US governments appear to have increased spend to counter US government spend from the previous administration, before the new federal budget started in October 2025.
2025: Deregulation appears to have enabled the questionable financing circles that act as a precursor to the bubble popping, no matter how any decreased valuations are framed (is it China's fault, tariff negotiation drama, or someone else's besides the non-scientific claims of tech executives before they retire?).
Where will this go from here?
After the hype subsides, AI will allow creativity to roam and become a great tool, in time. Otherwise, we are back to calling it machine learning rather than AI. Stay tuned...