DeepSeek’s $294K Model Training: How China Shattered the Economics of Frontier AI

Why the World’s Cheapest State-of-the-Art Language Model Signals the End of Silicon Valley’s Billion-Dollar AI Monopoly

TL;DR:

The West’s distrust of Chinese technology is proving to be an unexpected asset for Chinese AI companies. They are converting that distrust into transparency, peer review, and open source, leaving critics with no ground to stand on.

DeepSeek's $294K breakthrough is not only about cost; it is about transparency. As the first major AI model subjected to peer review with open weights, it exposes Big Tech’s closed, unauditable approach as a choice rather than a necessity. While Western companies burn hundreds of billions on secretive development, China has seized the lead in global open-source AI adoption. DeepSeek demonstrates that frontier AI can be built transparently, deployed with sovereignty, and accessed freely, ending the era when “trust us, it is too complex to verify” was considered acceptable AI governance.

Big AI investors must now demand auditability, as the sums being spent to build a global monopoly should already be sounding alarm bells. At the same time, any company or country can afford to build competitive models at a fraction of historic costs. The new benchmark is clear: peer review, open documentation, and verifiable claims. Any AI model that fails to meet this standard is no longer an asset but a liability.


This article is part of an ongoing series dissecting the global AI realignment, and exposing how Western narratives often obscure what's really happening in China and other markets. If you’ve read “No, Digital Sovereignty Doesn’t Mean You’re Anti-America or Pro-China: It’s Anti-Dependence,” you already know the framing: this isn’t cheerleading for China. It’s about recognizing reality, rejecting dependency, and reclaiming control.


Opening Reality Check

In 2025, OpenAI burns $22 million every day, enough to train 75 DeepSeek R1-level frontier models daily.

Published disclosures confirm that DeepSeek’s entire SOTA model cost just $294,000 to train.

Big AI’s business model is a textbook case of industrial-scale financial recklessness: relentless capital incineration, no path to profitability, complete opacity, and monopoly power propped up by cartel-driven hype. It is engineered for rent extraction and enforced dependency, not genuine innovation or public value.


Introduction

On January 27, 2025, Silicon Valley’s greatest illusion ended. Chinese developer DeepSeek released its R1 model to the public, triggering an unprecedented shockwave. Within hours, investor panic wiped $589 billion off Nvidia’s market value, as global financial markets questioned the future of AI hardware margins and the dominance of U.S. platform leaders. Industry incumbents, including OpenAI, Meta, xAI, and Google, scrambled to respond as the sector absorbed the collapse of the AI price wall and the exposure of “cartel economics” as a bluff.

OpenAI’s projected burn rate is not flat. Estimates differ, but all confirm that spending is accelerating rapidly:

  • 2025: ~$22M/day
  • 2026: ~$47M/day
  • 2027: ~$96M/day
  • 2028: ~$129M/day

Sources: The Information, CNBC, TechSpot, The Decoder.

Even now, the Big AI cartel continues to burn unsustainable amounts, tens of billions of dollars in capex and compute, chasing scale without profitability, doubling down on the narrative that such investments are the price of entry to the frontier. OpenAI alone projects a staggering $115 billion cash burn through 2029, an $80 billion increase from earlier forecasts that assumed break-even by then. The company expects $5 billion in losses on just $3.7 billion revenue in 2024, a deficit that will only worsen as spending accelerates exponentially.

The true scale of this spending spree becomes clear when examining the year-by-year escalation: OpenAI's daily burn rate is projected to explode from ~$22 million in 2025 to $47 million in 2026 (+114%), then $96 million in 2027 (+104%), reaching ~$129 million by 2028, a nearly 6x increase over four years. This isn't gradual scaling; it's compound acceleration toward financial catastrophe. Headlines citing static averages like "$80M/day" deliberately mask this exponential trajectory, concealing burn rates that more than double year-over-year while profitability remains a distant mirage.
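The escalation above can be checked with a few lines of arithmetic. This is a sketch only: the daily figures are rounded public estimates from the sources cited earlier, not audited numbers.

```python
# Year-over-year growth in OpenAI's estimated daily burn rate (USD millions).
# Figures are rounded public estimates from The Information, CNBC, TechSpot,
# and The Decoder, not audited disclosures.
burn = {2025: 22, 2026: 47, 2027: 96, 2028: 129}

years = sorted(burn)
for prev, curr in zip(years, years[1:]):
    growth = burn[curr] / burn[prev] - 1   # e.g. 47/22 - 1 = +114%
    print(f"{prev} -> {curr}: +{growth:.0%}")

overall = burn[2028] / burn[2025]          # ~5.9x over four years
print(f"2025 -> 2028: {overall:.1f}x overall")
```

Even with rounding, every step more than doubles until 2028, which is why a single static average understates the trajectory.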

Meanwhile, Meta, Amazon, and Microsoft collectively plan to spend $320 billion on AI in 2025 alone, with Amazon targeting $100 billion and Microsoft dedicating $80 billion to datacenters. The entire Western AI establishment is locked in an unsustainable arms race, burning unprecedented capital while generating minimal returns, all to justify a narrative that DeepSeek just demolished for $294,000.

OpenAI CEO Sam Altman has publicly stated that training GPT-4 cost more than $100 million. Leading industry analysts and AI executives now expect the next generation of frontier models (such as GPT-5, Gemini, and Claude 4) to cost $500 million, $1 billion, or more per training run, with Dario Amodei (Anthropic) and others projecting the era of $10 billion models within the next few cycles.

The old Sovereignty Cartel narrative (deliberately calling it what it is) was decimated in an instant on January 27, 2025. For years this cartel, a self-reinforcing alliance of dominant platforms, government enablers, and regulatory gatekeepers protected by state power, used firewalls, choke points, and favourable regulation to secure dominance, crush alternatives, and enforce dependency. It is not a loose network or industry trend but a coordinated system built on collusion, mutual interest, and political enforcement. If deployed within a single nation, antitrust regulators would call it monopolistic collusion. Globally, it evades scrutiny by exploiting jurisdictional gaps and regulatory fragmentation. That claim collapsed the moment DeepSeek disclosed its costs.

For months after launch, DeepSeek refused to disclose any official cost or compute figures. Analysts and Western competitors widely assumed R1’s breakthrough required massive infrastructure, with industry estimates of 2,000 GPUs and $5.6 million in training costs.

On September 18, 2025, DeepSeek put to rest any uncertainty about the costs. In a peer-reviewed Nature article and direct statements to Reuters, the company disclosed an unprecedented breakthrough:

DeepSeek-R1 was trained for just $294,000 in compute costs, using 512 Nvidia H800 GPUs over 80 hours.
This stands in stark contrast to the $100 million or more reportedly required to train foundational OpenAI models like GPT-4: a 340-fold reduction, just 0.294% of the cost.
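The headline ratio is easy to verify from the two figures cited here. Note that the per-GPU-hour rate below is my own back-of-envelope derivation from the disclosed 512 GPUs x 80 hours, not a figure DeepSeek published:

```python
# Cost comparison using the figures cited in the article.
gpt4_cost = 100_000_000   # reported floor for GPT-4 training, USD
r1_cost = 294_000         # DeepSeek-R1's disclosed compute cost, USD

ratio = gpt4_cost / r1_cost   # ~340x cheaper
share = r1_cost / gpt4_cost   # 0.294% of GPT-4's reported cost

# Implied compute scale from the disclosure (back-of-envelope only):
gpu_hours = 512 * 80                      # 40,960 H800 GPU-hours
cost_per_gpu_hour = r1_cost / gpu_hours   # implied ~$7.18 per GPU-hour

print(f"{ratio:.0f}x cheaper, {share:.3%} of the cost")
print(f"{gpu_hours} GPU-hours at ~${cost_per_gpu_hour:.2f}/GPU-hour")
```

Both headline numbers, the 340-fold reduction and the 0.294% share, fall directly out of the two disclosed costs.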

Future U.S. and Western models are openly forecast to cost hundreds of millions, possibly billions, per training run, yet none are profitable at scale. DeepSeek’s disclosure has turned this escalation into an indictment of the Big AI business model itself. With every new cost revelation, Big AI’s ability to justify these spiraling expenses, against a backdrop of chronic unprofitability, becomes increasingly difficult to defend.

While DeepSeek has now gone on record with its cost, Big AI only hints at its numbers in passing comments; nothing official has been released on the record. No major U.S. or Western AI vendor has provided independently audited cost figures. That said, around $100 million is the floor for GPT-4, and costs for next-gen models are spiraling far higher, as even Altman and Amodei now acknowledge.

Transparency: Peer Review as the New Baseline

After DeepSeek’s public release in January, the US government and Big AI vendors immediately pivoted the narrative. The original claim that only US-based Big AI firms could develop and operate frontier-scale models was replaced almost overnight with a new national security framing. Over the following nine months, this line was aggressively pushed through a coordinated campaign of exclusion, lobbying, and misinformation, presenting open Chinese models as untrustworthy and justifying refusal to allow independent audit of US models. 

The strategy held until 18 September 2025, when DeepSeek’s latest disclosure fundamentally altered the landscape. Far from isolating Chinese AI, the Western approach pushed Chinese vendors to respond with radical transparency and full peer review.

By publicly releasing every referee report, technical detail, and the verifiable $294K training cost, DeepSeek eliminated any plausible justification for Western opacity, proprietary control, or national security pretext. 

The West’s attempt to weaponise distrust backfired: Chinese models now set the global benchmark for peer review, technical openness, and cost auditability. In this new reality, any model or vendor hiding behind secrecy or “commercial sensitivity” is signalling strategic vulnerability, not advantage.

DeepSeek’s cost disclosure did more than reset the economics of foundation models. It established a new global baseline for transparency, scientific audit, and procurement credibility in AI. For the first time, a commercial-scale large language model was subjected to full independent peer review, with every referee report and author response released for public scrutiny in Nature.

This now has immediate operational and regulatory consequences across the sector.

  • Peer review is now a requirement for any credible AI deployment, procurement, or regulatory approval.
  • Buyers, boards, and regulators can now demand full peer review packs, technical documentation, audit trails, compliance mapping, and red-team reports as prerequisites.
  • Public peer review exposes shortcuts, collapses black-box claims, and breaks vendor lock-in built on secrecy.
  • Any vendor unable or unwilling to submit to independent, public review should be excluded from serious consideration: refusing is a direct signal of risk, technical weakness, or lack of credibility.
  • The new rule is explicit: no peer review, no trust, no sale. This is the baseline for all future AI models and vendors.

DeepSeek has proven cost, efficiency, and open validation can coexist at scale. The burden of proof has shifted: press release claims and NDAs are now operational liabilities, not assets.

DeepSeek-R1’s publication disclosed technical details—including the use of Nvidia H800 GPUs for the main training run, overall cost structure, methodology, and safety practices. Supplementary materials clarified that A100 GPUs were used for pre-training only; the full training run used H800s, as confirmed by Nvidia and in the peer-reviewed documentation. Allegations from U.S. officials of H100 use have not been substantiated by any public evidence.

DeepSeek also openly addressed claims about model distillation. Earlier models were distilled from Meta’s open-source Llama, and V3’s training data included web content containing OpenAI-generated answers as an incidental effect, not as direct copying. No peer-reviewed challenge or contradictory evidence has been published.

The difference in transparency is now definitive.

  • DeepSeek-R1: open weights (MIT license), open peer review, technical documentation, and public safety disclosures.
  • U.S. and European models from OpenAI, Google, and Anthropic: no peer-reviewed publications, no reviewer reports, no audited cost disclosures, closed weights.
  • Mistral: open weights, but without independent peer review.

Open weights from Chinese models such as DeepSeek R1 have now been downloaded more than 300 million times on Hugging Face (see Figure 1: Cumulative Downloads), with sovereign and enterprise deployments documented across Asia, the Middle East, and Europe. The chart illustrates exponential adoption over just a few months, underscoring the rapid diffusion of open, peer-reviewed models compared to traditional Big AI offerings. For procurement, regulation, and legal compliance, particularly under frameworks like the EU AI Act and emerging standards in Asia and the Middle East, public peer review, full technical documentation, and open validation are rapidly becoming operational requirements.

The result is an irreversible transparency standard. Arguments about “proprietary secrecy,” “commercial sensitivity,” or “audit infeasibility” are now obsolete. Buyers, boards, and regulators can demand public peer review, full technical documentation, compliance mapping, and red-team reports as prerequisites for deployment or approval. Any vendor refusing these requirements or hiding behind PR or NDAs is now a liability, not a strategic partner.

This is the new baseline:

  • If a model is not peer reviewed, it should not be procured or approved.
  • If the peer review is not public, it is not credible.
  • If a vendor will not submit to open scrutiny, it should be excluded.

DeepSeek’s disclosure and peer-reviewed release have reset the global standard. From this point forward, proof—not promise—is the foundation of trust in AI. This is now the operational reality for every credible actor in the sector.

Open-Source Models from China Have Overtaken the US in Downloads and Usage on Hugging Face

The impact of DeepSeek’s open, peer-reviewed release is not theoretical, it is visible in the global adoption curve for open AI models. As shown below, cumulative downloads by region crossed a historic threshold in the second half of 2025: China overtook the United States as the world’s leading source and consumer of open language models. The EU remains well behind both leaders.


Figure: Cumulative open model downloads by region (source: interconnects.ai, H2 2025).

The chart illustrates exponential adoption over just a few months, underscoring the rapid diffusion of open, peer-reviewed models beyond the US-based incumbents.

Until this year, the U.S. held a persistent lead, driven by first-mover advantage, developer concentration, and broad industry adoption. That dominance ended as China’s open model ecosystem, led by DeepSeek, Qwen, Kimi, and others, accelerated past the U.S. in both raw downloads and operational deployments. The inflection point (“the flip”) signals not just a statistical milestone, but a redistribution of real global capability.

  • China’s surge is rooted in open licensing, transparent peer review, and rapid adoption across both commercial and government sectors.
  • The U.S. curve, while still rising, is now outpaced—constrained by proprietary models, slower transparency adoption, and tightening regulatory friction.
  • The EU, despite regulatory activism, remains a distant third, reflecting lower model release volume and more cautious market uptake.

The implications are clear:

The centre of gravity for open AI model development and adoption has shifted decisively. What began as a cost shock has become a full-scale ecosystem disruption. Open, peer-reviewed Chinese models have not only redefined technical possibility, but have rewritten the geopolitical logic of the global AI landscape. This is no longer a story of U.S. leadership merely being contested, it is a wholesale transfer of momentum, now objectively measurable in adoption rates, operational deployments, and the pipeline of sovereign models under development worldwide.

Any company, government, or research consortium with moderate resources and technical competence now possesses the capability to build and iterate state-of-the-art models at a fraction of the historic cost.

The former monopoly of U.S. and Chinese hyperscalers has collapsed. In Europe, the post-Mistral era is already taking shape: multiple national and EU-funded open foundation models are moving from concept to production, including French, German, and pan-European initiatives designed for both sovereign deployment and sectoral adoption.

The Middle East is next. GCC states, including the UAE, Saudi Arabia, and Qatar, are mobilising billions in targeted investment, with sovereign cloud and language model projects now in advanced stages. The momentum is clear: regional and linguistic sovereignty are being hard-coded into national digital strategies, and the era of dependency on foreign AI infrastructure is closing.

Crucially, this shift is not limited to the major economic blocs. The Global South, Southeast Asia, Africa, and Latin America are all poised to enter the frontier model arena, enabled by falling costs, open weights, and transparent methodologies. In the last twelve months alone, more than a dozen sovereign or sector-specific large language models have been announced or launched outside the U.S. and China, from Brazil’s BERT-based agri-tech models to Indonesia’s Bahasa-focused AI stack.

The technical and economic barrier, once enforced by cost, access, and opaque supply chains, is gone. The result is an irreversible globalisation of AI capability. In this new landscape, any credible actor (public or private, large or small) can build, adapt, and deploy advanced AI infrastructure. The competitive field is now global, plural, and unbounded by legacy gatekeepers. This is not just a redistribution of usage; it is a structural realignment of digital power, innovation, and sovereignty.

Impact on the Big AI Narrative and Global Influence Playbooks

DeepSeek’s $294K breakthrough has done more than shift costs and adoption curves, it has fundamentally dismantled the narrative long maintained by Big AI incumbents. For years, the story was simple: frontier AI requires billions in compute, massive proprietary datasets, and centralized Western-controlled infrastructure. That narrative justified high margins, vendor lock-in, and control over sovereign and enterprise procurement. DeepSeek has stripped that claim bare, proving high-performance models can be produced at a fraction of the cost, openly peer-reviewed, and widely deployed.

Big AI is already reacting. Governments across Europe, North America, the Middle East, and allied regions are facing intensified lobbying, targeted partnerships, and policy influence campaigns aimed at preserving non-sovereign dependency and rent capture. The playbook is consistent: embed models into government-funded projects, offer “preferred access,” and shape technical standards to maintain leverage, despite being technologically outpaced. Quantitative indicators already show rising engagement: EU procurement programs are actively piloting multiple open-architecture models, GCC nations are rapidly deploying sovereign AI initiatives, and multiple Global South countries are experimenting with locally hosted or open-source LLMs.

Recent UK examples illustrate this clearly. Microsoft’s £22 billion AI “investment” has been paraded as a national triumph, while in reality it entrenches permanent rent infrastructure. Data centers may sit in the UK, but upgrade levers, integration controls, and policy influence remain firmly in U.S. hands.

As I detailed in Britain’s Digital Sovereignty Giveaway: The Rent Extraction Trap, this mirrors the classic dynamics described by Fred Hirsch (Social Limits to Growth) and Anne Krueger (The Political Economy of the Rent-Seeking Society): stratified access determines reward, and resources consumed in rent-seeking are a net economic loss. In effect, the UK is paying to access its own digital future. Multi-year contracts, recurring upgrade fees, and embedded “preferred access” clauses mean sovereignty is compromised even while local infrastructure exists.

The broader pattern is evident across Europe and the Middle East: open models from China and regional initiatives are enabling governments and companies to bypass traditional Big AI gatekeepers. Within months, DeepSeek-R1’s open weights had been downloaded over 10.9 million times on Hugging Face, with sovereign deployments in China, the GCC, and the EU. The technology and methodology are now replicable globally, providing a tangible mechanism for breaking dependency cycles and eroding the influence of entrenched Western platforms.

This dynamic is accelerating as nations integrate AI into critical infrastructure, education, healthcare, and defense. Sovereign, open, cost-efficient alternatives now exist, yet entrenched Big AI actors continue to leverage regulatory, legal, and procurement channels to preserve influence. Countries that fail to diversify risk replicating historical dependency cycles, now updated for AI infrastructure. Metrics of AI adoption in open ecosystems, emerging EU multi-provider frameworks, and GCC-backed sovereign compute clusters show the center of gravity shifting away from closed, high-cost U.S.-centric systems.

The narrative battle has shifted: from technical superiority to strategic influence. Open models, transparent peer review, and cost efficiency no longer threaten only margins—they directly challenge Big AI’s legitimacy and control at government and policy levels. Peer-reviewed disclosures, public reviewer reports, and open technical documentation now serve as operational levers for boards, regulators, and sovereign procurement teams to reject non-sovereign lock-in. Governments and agencies can now evaluate models based on verifiable data, safety compliance, and efficiency, rather than vendor promises alone.

The next phase of this ecosystem shock is not just adoption, it is a global contest over sovereignty, regulatory authority, and control of AI infrastructure itself. Concrete battlegrounds are already visible: the EU seeking to expand beyond Mistral, the GCC positioning for rapid entry, and the Global South preparing to bypass dependency cycles altogether.

The emergence of a new standard, peer-reviewed, transparent, cost-efficient, and widely deployable models, means that Big AI’s historical rent-capture narratives are increasingly indefensible.

Countries that embrace these alternatives will gain both technical and strategic independence; those that do not risk reproducing cycles of dependency and economic leakage that DeepSeek’s breakthrough has now made plainly avoidable.


The Bottom Line: Proof, Not Promise - The Frontier Is Open to All.

DeepSeek-R1 proves that frontier AI is no longer the exclusive domain of billion-dollar Western labs. At a fraction of the cost (just $294,000 versus $100 million or more), this peer-reviewed, open model matches or exceeds GPT-4’s capabilities. The implications are immediate, systemic, and global:

  • The Cost Wall Is Broken: The era of “AI requires billions” is over. Any nation, enterprise, or consortium with modest compute can now compete at the frontier.
  • Big AI’s Narrative Shattered: The long-standing justification for vendor lock-in, inflated pricing, and centralized control has been exposed as a bluff. Rent-capture strategies and exclusive claims to frontier AI are no longer credible.
  • Sovereignty Becomes Operational: Governments, regulators, and enterprises can now demand peer-reviewed, transparent, and auditable models. Non-sovereign dependency is a strategic vulnerability, and the playbook for monopolistic influence used by the Sovereignty Cartel is disrupted.
  • Global Adoption Accelerates: Open models are spreading fast, with deployments across Asia, the Middle East, Europe, and beyond. Expect the EU, GCC, the Global South, and other regions to scale rapidly.
  • The New Benchmark: Proof, not promise, defines credibility. Peer review, transparency, and cost efficiency are the baseline for trust, procurement, and regulation. Any AI model failing these standards is a liability.

The DeepSeek moment is irreversible. The next model will be cheaper, open, and auditable. Frontier AI has been democratized. 

The age of hidden costs, opaque models, and monopolistic AI control is over. DeepSeek just handed the keys to the frontier to the world.

The question is no longer if the Big AI cartel will be disrupted; it is how quickly the rest of the world will capitalize on it.

Endnotes

DeepSeek R1 Training Cost ($294,000)

  1. CNN Business, "China's DeepSeek shook the tech world. Its developer just revealed the cost of training the AI model," September 19, 2025 - https://www.cnn.com/2025/09/19/business/deepseek-ai-training-cost-china-intl 
  2. Nature, "Secrets of DeepSeek AI model revealed in landmark paper," September 2025 - https://www.nature.com/articles/d41586-025-03015-6 

Nvidia Market Cap Loss ($589 Billion)

  1. Yahoo Finance, "Nvidia stock plummets, loses record $589 billion as DeepSeek prompts questions over AI spending," January 27, 2025 - https://finance.yahoo.com/news/nvidia-stock-plummets-loses-record-589-billion-as-deepseek-prompts-questions-over-ai-spending-135105824.html 
  2. Bloomberg, "Nvidia's $589 Billion DeepSeek Rout Is Largest in Market History," January 27, 2025 - https://www.bloomberg.com/news/articles/2025-01-27/asml-sinks-as-china-ai-startup-triggers-panic-in-tech-stocks 

DeepSeek R1 First Peer-Reviewed Major LLM

  1. Nature Journal, "DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning," September 2025 - https://www.nature.com/articles/s41586-025-09422-z 
  2. Xinhua, "DeepSeek's R1 sets benchmark as first peer-reviewed major AI LLM," September 18, 2025 - https://english.news.cn/20250918/9375e7d23dca4163be9cd05da9da6e0b/c.html 

Tech Companies $320 Billion AI Spending

  1. CNBC, "Tech megacaps plan to spend more than $300 billion in 2025 as AI race intensifies," February 8, 2025 - https://www.cnbc.com/2025/02/08/tech-megacaps-to-spend-more-than-300-billion-in-2025-to-win-in-ai.html 
  2. TheWrap, "Meta, Google, Amazon & Microsoft to Spend a Combined $320 Billion on AI in 2025," February 7, 2025 - https://www.thewrap.com/meta-google-microsoft-amazon-spend-big-on-ai-2025/ 

OpenAI Projected Cash Burn

  1. CNBC, "OpenAI expects business to burn $115 billion through 2029, The Information reports," September 6, 2025 - https://www.cnbc.com/2025/09/06/openai-business-to-burn-115-billion-through-2029-the-information.html 
  2. The Decoder, "OpenAI has reportedly misjudged its cash burn by $80 billion," September 2025 - https://the-decoder.com/openai-has-reportedly-misjudged-its-cash-burn-by-80-billion/ 

Chinese Models Downloads & Regional Data

  1. Interconnects.ai, "On China's open source AI trajectory," by Nathan Lambert, September 2025 - https://www.interconnects.ai/p/on-chinas-open-source-ai-trajectory 

DeepSeek Downloads on Hugging Face

  1. Nature, "Secrets of DeepSeek AI model revealed in landmark paper," September 2025 - https://www.nature.com/articles/d41586-025-03015-6 


#AI #ArtificialIntelligence #TechGeopolitics #AIInfrastructure #GlobalAI #AISovereignty #DigitalSovereignty #transparency #AITransparency #TechProcurement #Innovation #Leadership #DeepSeek #OpenAI #BigAI #BigTech


About the Author

Dion Wiggins is Chief Technology Officer and co-founder of Omniscien Technologies, where he leads the development of Language Studio—a secure, regionally hosted AI platform for digital sovereignty. It powers translation, generative AI, and media workflows for governments and enterprises needing data control and computational autonomy. The platform is trusted by public sector institutions worldwide.

A pioneer of Asia’s Internet economy, Dion founded Asia Online, one of the region’s first ISPs in the early 1990s, and has since advised over 100 multinational firms, including LVMH, Intuit, Microsoft, Oracle, SAP, IBM, and Cisco.

With 30+ years at the crossroads of technology, geopolitics, and infrastructure, Dion is a global expert on AI governance, cybersecurity, and cross-border data policy. He coined the term “Great Firewall of China”, and contributed to national ICT strategies—including China’s 11th Five-Year Plan.

He has advised governments and ministries across Asia, the Middle East, and Europe, shaping national tech agendas at the ministerial and intergovernmental level.

As Vice President and Research Director at Gartner, Dion led global research on outsourcing, cybersecurity, open-source, localization, and e-government, influencing top-level public and private sector strategies.

He received the Chairman’s Commendation Award from Bill Gates for software innovation and holds the U.S. O-1 Visa for Extraordinary Ability—awarded to the top 5% in their field globally.

A frequent keynote speaker and trusted advisor, Dion has delivered insights at over 1,000 global forums, including UN summits, Gartner Symposium/Xpo, and government briefings. His work has been cited in The Economist, Wall Street Journal, CNN, Bloomberg, BBC, and over 100,000 media reports.

At the core of his mission:

"The future will not be open by default—it will be sovereign by design, or not at all."


