Newsroom



STATISTICS & DATA SCIENCE NEWSROOM

Hosted by Chong Ho (Alex) Yu, SCASA President (2025-2026 term)

Posted on December 7, 2025

Anthropic’s research team recently examined how AI—specifically Claude and its coding assistant Claude Code—is reshaping day-to-day engineering work inside the company. The study draws on surveys from 132 engineers and researchers, 53 interviews, and internal tool-usage data. Together, the findings show a workplace undergoing rapid transformation. Engineers now use Claude for roughly 60% of their work, and most report about a 50% boost in productivity. Beyond speed, AI is expanding the scope of what gets done: around a quarter of AI-assisted work involves tasks that previously would have been ignored or deprioritized, such as building internal dashboards, drafting exploratory tools, or cleaning up neglected code. Claude also makes engineers more “full-stack,” enabling them to work across languages, frameworks, and domains they might not normally touch. Small, tedious jobs—bug fixes, refactoring, documentation—are now far easier to complete, and this reduces project friction.

The transformation is not without costs. Engineers increasingly rely on AI for routine coding, which raises concerns about eroding foundational skills, especially the deep knowledge needed to evaluate or critique AI-generated code. Even though AI assists heavily, fully delegating high-stakes work remains rare; many engineers only hand off 0–20% of such tasks because they still want control when correctness matters. Interviews also reveal a cultural shift: some developers feel coding is becoming less of a craft and more of a means to an end, which creates mild identity friction. Collaboration patterns are also changing. Because people now ask Claude first, junior engineers reach out to colleagues less, and spontaneous mentorship moments have declined. This makes learning trajectories murkier, as traditional peer-to-peer knowledge transfer is no longer guaranteed. Finally, there is uncertainty about long-term roles. Some worry that AI progress may reduce the need for certain types of engineering labor, while others see emerging opportunities in higher-level oversight, orchestration, and AI-augmented project design.

That’s my take on it: Anthropic’s research shows that many developers feel coding is shifting from an artisanal craft to a pragmatic means of accomplishing a task. To me, this simply confirms what I have been saying all along. There is nothing wrong with making things easier; in fact, the entire history of computing is a long march toward reducing friction. We moved from machine language—raw binary streams of 0s and 1s that only a CPU could love—to assembly language, where mnemonics like ADD, MOV, and JMP gave us a slightly more humane way to speak to the machine. Then came high-level programming languages, finally letting humans express intent in something closer to ordinary language, even though for many people the error messages still read like Martian poetry. With the arrival of graphical user interfaces, everyday users no longer needed to think in syntax at all. Drag-and-drop and point-and-click replaced pages of code.

In that sense, the surge of coding culture over the last decade was actually the historical anomaly—a moment when society briefly celebrated the ability to “speak machine” before tools evolved to make that fluency less necessary. Generative AI is now returning us to the broader technological trend: lowering barriers, abstracting away complexity, and letting people focus on the real goals rather than the mechanics. As a data science professor, I’ve always told my students that the point is not to engage in hand-to-hand combat with syntax; the point is to extract insight, solve problems, and make decisions that matter. If AI can remove the drudgery and let us operate at the level of reasoning rather than rote implementation, then we are simply continuing the same trajectory that gave us assembly, high-level languages, and the GUI. In other words, AI is not the end of programming—it’s the next chapter in making computing more human.

Link: https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic

Posted on December 6, 2025

The integration of science and philosophy delivers a sobering message about our everyday intuitions. Common-sense perception evolved to help us dodge predators and find food, not to penetrate the deep structure of reality. It tells us that objects are solid, that time flows uniformly, and that causes always precede effects in a simple, linear way. Physics tells a stranger story: spacetime can curve, stretch, and even form horizons; quantum systems can be entangled across vast distances; information can be bounded by area rather than volume; erasing a bit in a memory chip heats up the environment. Against this backdrop, it is risky for philosophers—or anyone—to dismiss ideas simply because they are counter-intuitive, offend “what seems obvious,” or violate basic logic, while ignoring the constraints of modern science and mathematics.

Posted on December 3, 2025

OpenAI CEO Sam Altman recently declared a company-wide "code red," signaling an urgent and critical effort to retain the company's competitive lead in the rapidly evolving AI landscape. The move is a direct response to the increasing pressure from major rivals, particularly Google with its Gemini models and Anthropic with its Claude offerings, which are reportedly closing the performance gap or even surpassing OpenAI's existing models in certain benchmarks. The "code red" mandates that OpenAI employees shift all resources to prioritize improving the core ChatGPT experience, focusing on making the chatbot faster, more reliable, and better at personalization to maintain its substantial user base. Consequently, OpenAI is pausing work on other monetization and experimental projects, including its planned ad-based strategy, shopping features, AI agents, and the personal assistant "Pulse." This intense focus comes as the company faces massive financial burn rates and trillion-dollar infrastructure commitments, meaning sustaining its dominant market position and high valuation is now a matter of existential importance as rivals like Google and Anthropic continue to gain ground.

 

That’s my take on it:

The internal "Code Red" declared by OpenAI is a justifiable response to an increasingly intense competitive environment, as the threats posed by rivals are supported by objective performance data. Current benchmarks indicate that ChatGPT's performance is demonstrably lagging in several key frontier areas:

  • Multimodal Excellence: Google is establishing leadership in generative media. Its Nano Banana model is widely considered the leading AI image generator, with its quality prompting adoption by industry giants like Adobe and HeyGen. Further, side-by-side comparisons by technical reviewers show that Google’s video generator, Veo, consistently outperforms competitors like Sora, WAN, and Runway.
  • Coding Superiority: For software engineering tasks, Anthropic’s Claude Opus 4.5 claims the top spot for accuracy, achieving success rates around 80.9% (according to Composio), which exceeds OpenAI's specialized coding model, GPT-5.1 Codex (77.9%).
  • Advanced Reasoning: In complex cognitive tasks, Google’s Gemini 3 Pro demonstrates a significant edge on ultra-hard reasoning tests (e.g., GPQA Diamond), with performance described as "PhD-level" on key frontier benchmarks (Marcon).

While rivals lead in performance benchmarks, ChatGPT still maintains a commanding lead in consumer reach. As of late 2025, ChatGPT boasts over 800 million weekly active users, significantly outnumbering Google Gemini (estimated at 650 million) and Anthropic’s Claude (estimated at 30 million). However, Gemini is rapidly closing this gap, and Claude remains a dominant force in high-value enterprise and developer markets.

Given this robust user base and the company's clear focus under the "Code Red," it is unlikely that ChatGPT will follow the decline of past tech leaders like Novell NetWare or WordPerfect. Instead, this intense and well-evidenced competition is expected to spur rapid innovation from OpenAI, ultimately resulting in better and more powerful tools for end-users.

 

Links: https://www.cnbc.com/2025/12/02/open-ai-code-red-google-anthropic.html

https://macaron.im/blog/claude-opus-4-5-vs-chatgpt-5-1-vs-gemini-3-pro

https://composio.dev/blog/claude-4-5-opus-vs-gemini-3-pro-vs-gpt-5-codex-max-the-sota-coding-model

Posted on November 26, 2025

On November 24, 2025, President Trump signed an executive order launching the Genesis Mission, a sweeping federal effort to harness artificial intelligence for scientific research and innovation.  The initiative tasks the United States Department of Energy (DOE) with building a unified “American Science and Security Platform” — combining DOE national-lab supercomputers, secure cloud environments, and federal scientific data sets, making them accessible to researchers, universities, and private-sector collaborators.

The goal is to accelerate breakthroughs across major domains such as advanced manufacturing, biotechnology, critical materials, nuclear (fission and fusion), quantum information science, and semiconductors. By enabling AI-driven modeling, simulations, automated experimentation, and large-scale data analysis, Genesis Mission aspires to shorten research timelines, strengthen national security, boost energy development, and enhance overall scientific productivity.

Officials liken the scale and ambition of the program to earlier landmark federal science mobilizations — describing it as a generational effort to maintain U.S. technology leadership. At the same time, some observers raise concerns: massive AI and computing resources demand enormous amounts of energy, raising environmental and sustainability issues, especially given the rising electricity usage of data centers globally.

In short, Genesis Mission aims to centralize federal scientific data and computing power under a unified, AI-ready infrastructure, leveraging AI not just for narrow tasks but to systematically accelerate scientific discovery — though it comes with trade-offs around energy, governance, and security.

An article in Nature raises significant caveats and risks. One concern is how “access” will be managed: even though the plan promises broader availability, it remains unclear who will actually benefit — big labs, elite universities, or well-funded private companies — and whether smaller institutions or independent researchers will get meaningful access.

Another worry is about oversight and governance: when the government becomes both steward of data/computing infrastructure and a participant in scientific output, issues of transparency, fairness, and potential concentration of power become more pressing.

 

That is my take on it:

The U.S. does seem to be adopting a more centralized, mission-driven strategy similar to Japan’s Fifth Generation Computer Systems project in the 1980s or China’s more recent state-steered AI initiatives. But the historical analogy has limits. Japan’s fifth-generation effort was built on a speculative bet about logic programming and parallel inference machines, which ultimately failed because the chosen paradigm didn’t scale and the commercial sector moved in other directions. What makes the present moment different is that today’s frontier technologies — AI, quantum computing, advanced cloud-supercomputing — are no longer speculative. They are proven, economically entrenched, and strategically unavoidable. Modern AI systems already show transformative impact across science, national security, and industry, and quantum computing and semiconductors are recognized as critical chokepoints in global power competition. Because these technologies require staggering capital, compute infrastructure, and coordination, the private sector alone cannot build or integrate them at national scale. In that sense, government leadership is not about “picking winners” prematurely, as Japan did, but about building public-goods infrastructure: shared compute, standardized data platforms, talent pipelines, and national-lab capabilities that accelerate innovation across universities and industry. The risk of misallocation still exists — large state-led projects can drift or become politically shaped — but given the maturity and strategic clarity of these technologies, public investment today is closer to funding railroads or the Apollo program than chasing an untested paradigm. Overall, this round of intervention looks more justified, more grounded in established trends, and more aligned with long-term scientific and geopolitical realities.

 

Link: https://www.nbcnews.com/tech/tech-news/trump-signs-executive-order-launching-genesis-mission-ai-project-rcna245600

https://www.nature.com/articles/d41586-025-03890-z

 

Posted on November 26, 2025

Meta is reportedly in advanced discussions with Google about a multibillion-dollar deal that would bring Google’s Tensor Processing Units (TPUs) into Meta’s own data centers beginning around 2027. As part of the transition, Meta may first rent TPU capacity through Google Cloud next year before moving toward on-premises TPU deployment. The news immediately affected the market: Nvidia’s shares fell roughly 4% after the report surfaced, reflecting investor concern that a major AI-compute buyer might shift part of its workload away from Nvidia’s dominant GPU ecosystem. At the same time, Alphabet’s stock rose as investors anticipated the possibility of Google gaining a larger share of the AI-hardware market.

That’s my take on it:

Long-term, this development suggests that the AI-hardware landscape may be entering a more competitive and less GPU-centric era. If large hyperscalers like Meta diversify beyond Nvidia, it reduces vendor lock-in and could push the industry toward a mix of GPUs, TPUs, and other accelerators. Such diversification would also place pressure on pricing and innovation: Nvidia’s strength comes not only from hardware performance but from CUDA, the software ecosystem that has locked in years of developer expertise. For TPUs or other ASIC-based accelerators to gain broader traction, their surrounding software stacks—compilers, runtime systems, optimization libraries, developer tools—must continue to mature. If they do, Nvidia’s moat could narrow significantly. In addition, hyperscalers increasingly prefer to control more of their compute destiny, which may accelerate the trend toward custom silicon or alternative architectures.

No technology king reigns forever. Novell NetWare once dominated network operating systems until it was displaced by Windows NT. UNIX workstations and powerhouse vendors like Sun Microsystems and SGI defined high-end computing until the market shifted toward other architectures, leading to their decline. Compaq was once the best-selling PC brand before it eventually faded into acquisition and obsolescence. These precedents show that technological leadership is always contingent, vulnerable to architectural transitions, ecosystem shifts, and strategic pivots by major buyers. Nvidia remains the leader today, but the possibility that Google—or another contender—could overtake it is entirely plausible, especially if hyperscalers begin migrating workloads to alternative accelerators at scale.

Links: https://finance.yahoo.com/news/meta-google-discuss-tpu-deal-233823637.html

https://nypost.com/2025/11/25/business/nvidia-shares-sink-4-after-report-of-meta-in-talks-to-spend-billions-on-google-chips/

Posted on November 19, 2025

What happens when intelligence no longer resides solely in distant hyperscale data centers but instead becomes embedded directly within the physical world? What new possibilities emerge when AI can think, react, and learn on vehicles, robots, medical devices, smart grids, and factory equipment—without relying on the cloud for every decision?
These questions underpin a profound shift in how modern computing systems are architected. Edge, fog, and cloud computing, once presented as alternatives, now form a unified continuum that distributes computational tasks based on latency sensitivity, contextual relevance, and the scale of data processing required. Rather than competing with the cloud, edge and fog computing extend its capabilities outward, enabling intelligent systems that are not only powerful but also immediate, context-aware, and resilient.
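
To make the continuum concrete, here is a minimal, purely illustrative sketch of how a workload-placement decision might be expressed. The tier names and latency thresholds are my own assumptions, not drawn from any particular platform.

```python
# Illustrative sketch only: tier names and latency/data thresholds are
# assumptions, not taken from any specific cloud or edge platform.

def place_workload(latency_budget_ms: float, data_volume_gb: float) -> str:
    """Pick a tier in the edge-fog-cloud continuum for a workload."""
    if latency_budget_ms <= 10:
        return "edge"        # on-device or on-premises inference
    if latency_budget_ms <= 100 and data_volume_gb < 1:
        return "fog"         # nearby gateway or micro data center
    return "cloud"           # large-scale training and batch analytics

print(place_workload(5, 0.01))    # edge: real-time control loop
print(place_workload(50, 0.5))    # fog: local aggregation
print(place_workload(5000, 500))  # cloud: model retraining
```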

Posted on November 19, 2025

Yesterday (Nov 18, 2025), Google announced Gemini 3, presenting it as its most capable AI model to date, with major leaps in reasoning, multimodality, and long-context performance. It scored 1501 Elo on LMArena, the highest public rating for structured reasoning among all LLMs.

The model delivers substantially stronger results across advanced benchmarks—such as GPQA, MathArena, Video-MMMU, and WebDev coding tests—showing clear gains in scientific reasoning, complex mathematics, coding accuracy, and cross-modal understanding. Gemini 3 can process up to 1 million tokens, handle text, images, video, code, and handwritten materials natively, and generate deeply structured responses, explanations, visualizations, and plans. Google also introduced a “Deep Think” mode that pushes the model’s analytical depth even further for long-horizon reasoning and complex problem-solving. In practice, Gemini 3 supports three main use cases: helping users learn difficult material through multimodal comprehension, powering interactive and agentic coding workflows for developers, and enabling more reliable long-step planning with tool usage. The model is already integrated into Search’s AI mode, the Gemini app, and Google’s development platforms, marking the company’s strongest push yet toward a unified, high-performance generative AI system.

 

That’s my take on it:

To compare Gemini 3 and ChatGPT 5.1 under similar conditions, I used the thinking mode for both models and posed the following challenging question: “The three core domains of physics—General Relativity (gravity/spacetime), Quantum Mechanics (subatomic world), and Thermodynamics (entropy/heat)—all share a deep, fundamental relationship with Information. Develop an integrated academic argument that synthesizes the concepts of Information Theory, Physics, and Philosophy to address the following question: Is Information a fundamental, irreducible property of the Universe (a "substance" or "thing-in-itself"), or is it an emergent property arising solely from the arrangements and states of physical matter/energy?”

 

ChatGPT took longer to generate its response, but its answer was more elaborate than Gemini’s. Gemini’s final verdict is: “The weight of modern evidence supports the conclusion that Information is the irreducible thing-in-itself. Matter, energy, and even spacetime are likely phase transitions or emergent representations of a deeper, underlying informational code. We do not live in a universe of matter that stores information; we live in a universe of information that manifests as matter.”

ChatGPT’s conclusion is: “Information is a fundamental, irreducible aspect of the universe’s ontology—but best understood as structure rather than substance. Matter/energy, spacetime, thermodynamic entropy, and conscious experience are all different faces of that same underlying informational reality.” Substantively, both models affirm the same hierarchy—that information is more fundamental than matter—while differing mainly in rhetorical framing. Nonetheless, this single comparison does not establish that either model is categorically superior; a more considered verdict will require further, systematic testing.

 

Link: https://blog.google/products/gemini/gemini-3/#note-from-ceo

Posted on November 17, 2025

Sakana AI has officially become Japan’s most valuable unlisted startup after completing a major funding round that boosted its valuation to about 400 billion yen ($2.63 billion). The company raised roughly 20 billion yen from a mix of domestic and international backers, including MUFG, U.S. venture capital firms, Santander Group, and Shikoku Electric Power.

Sakana AI specializes in large language models tailored to Japanese language and culture, which have attracted major financial partners such as MUFG and Daiwa Securities—both previously committing up to billions of yen for finance-focused AI systems. Looking forward, the company plans to expand into defense and manufacturing, and it projects becoming profitable next year. Founded in 2023 by former Google researcher David Ha, the startup is known for its efficient, multi-model LLM architecture and a recent breakthrough enabling rapid self-improvement in its systems.

Globally, investor enthusiasm for AI remains high, with OpenAI valued around $500 billion, Anthropic at $183 billion, and France’s Mistral AI at 11.7 billion euros after its latest round. While U.S. giants pursue massive general-purpose intelligence, companies like Sakana AI and Mistral focus on specialized or regionally adapted models, aligning with the growing push for “sovereign AI” as countries seek technological autonomy amid geopolitical tensions.

In Japan, Sakana AI now surpasses Preferred Networks, which previously held the top valuation but has declined to around 160 billion yen after recent funding adjustments.

That’s my take on it:

For a long time, people have criticized mainstream LLMs for cultural bias and for being overly shaped by U.S. data, norms, and perspectives. Instead of endlessly pointing fingers at American AI companies, Japan has taken a more constructive path by developing its own domestically grounded LLM. This is a smart strategic move—one that lets Japan build models that better understand its linguistic subtleties, cultural context, and industrial needs.

However, very few countries possess the deep technical expertise, data infrastructure, and financial resources required to build their own large-scale language models. As a result, despite global interest in “sovereign AI,” the landscape will likely remain concentrated among a small group of technologically advanced nations—such as the United States, China, Japan, and France. In the end, LLM development may continue to be shaped by a handful of major players with the capacity to compete at this scale.

While most nations cannot realistically build their own LLMs, they can still play an active role in shaping how these models understand their languages and cultures. One practical pathway is collaboration: governments, research institutions, and cultural organizations can partner with major AI developers to contribute representative datasets, linguistic corpora, and culturally grounded knowledge. This approach allows countries to maintain some degree of cultural sovereignty without bearing the massive cost of full-scale model development. In many cases, co-creation with established AI companies may be the most feasible way for smaller nations to ensure that their histories, values, and perspectives are reflected accurately within global AI systems.

Link: https://asia.nikkei.com/business/technology/artificial-intelligence/sakana-ai-takes-crown-as-japan-s-most-valuable-unicorn

Posted on November 15, 2025

Yann LeCun, a celebrated deep-learning pioneer (2018 Turing Award laureate) and longtime chief AI scientist at Meta, is reportedly preparing to leave the company in the coming months to found his own startup. According to sources cited by the Financial Times, he is already in early fundraising talks for the new venture. The startup will reportedly focus on developing “world models” — AI systems capable of understanding the physical world through video and spatial data, rather than relying primarily on large language-model (LLM) text systems.

This signals a divergence from the path Meta has been increasingly pursuing, which centers on deploying generative models and rapidly bringing AI-powered products to market. LeCun’s exit comes amid a major strategic shift at Meta. The company recently created a new AI unit, Meta Superintelligence Labs, led by Alexandr Wang (ex-Scale AI), and Meta has invested heavily (billions) in restructuring and recruiting for AI. Within this reorganization, LeCun’s traditional research unit, Facebook Artificial Intelligence Research (FAIR) (now part of Meta’s AI research structure), appears to have been somewhat deprioritized in favor of faster-paced product-oriented work.

For Meta, losing a figure of LeCun’s stature underscores growing tensions between foundational, long-horizon AI research and the push for quick product rollout and competitive productization in the AI arms race. The move raises questions about whether the company’s new direction may compromise longer-term research innovation. LeCun himself has been publicly skeptical of large language-model approaches as sufficient for human-level reasoning and instead has argued for architectures that incorporate physics, perception and world modelling.

This is my take on it:

At this stage of his career, Yann LeCun may actually benefit from stepping outside Meta’s orbit. Since his landmark work applying the backpropagation algorithm to train convolutional neural networks (CNNs), he hasn’t produced another breakthrough on the same scale, while Meta’s flagship model, LLaMA, continues to lag behind fast-advancing rivals like ChatGPT and Gemini. In that sense, his departure could serve both sides well. Meta can fully commit to its new product-driven AI roadmap, and LeCun can finally pursue the long-term research vision—especially world models—that never quite fit Meta’s increasingly commercial structure.

The situation echoes an earlier chapter in tech history. When Steve Jobs left Apple, it initially looked like a setback, but the distance allowed him to experiment, rebuild, and ultimately transform not only himself but the company he eventually returned to. LeCun may be entering a similar kind of creative detachment. Free from the organizational constraints, time pressures, and internal priorities of a trillion-dollar platform, he might discover the conceptual space needed for a genuine leap—perhaps the kind of architectural breakthrough he has been arguing for in world-model-based AI. Rather than a retreat, this transition could mark the beginning of his most innovative phase in years.

Link: https://arstechnica.com/ai/2025/11/metas-star-ai-scientist-yann-lecun-plans-to-leave-for-own-startup/

Posted on November 13, 2025

Who first coined the term that defines our century’s greatest technological ambition—Artificial General Intelligence? We celebrate OpenAI, DeepMind, and Anthropic, but the phrase itself was not born in Silicon Valley. It came from a physicist at the margins of computer science—Mark Gubrud—whose goal was not to accelerate machine cognition, but to warn humanity about its potential perils. Why then is the man who coined the term AGI almost invisible in the history of AI?

Link: https://www.youtube.com/watch?v=hOjJXCy3tsE

Posted on November 12, 2025

The Queen Elizabeth Prize for Engineering (QEPrize) panel, held at the Financial Times Future of AI Summit in London in early November 2025, brought together six of the world’s most prominent AI visionaries—Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Fei-Fei Li, Jensen Huang, and Bill Dally—to discuss the trajectory of artificial intelligence and its societal implications. The conversation centered on whether AI will ever reach or surpass human intelligence, and what such a milestone would mean for humanity. Hinton speculated that machines capable of outperforming humans in complex reasoning and debate might emerge within two decades, while Bengio argued that progress will occur gradually in waves rather than through a single “singularity” moment. In contrast, LeCun cautioned that the field remains far from human-level cognition, particularly in domains requiring physical reasoning and common-sense understanding.

Fei-Fei Li emphasized that while AI already exceeds human perception in narrow tasks such as image recognition, it still lacks the holistic intelligence that arises from embodied experience, social awareness, and ethics. Huang reframed the debate by suggesting that asking when AI will match humans is less relevant than how humans can harness its growing capabilities for creative and productive purposes. Dally reinforced this human-centric view, stressing that AI should be designed to augment rather than replace human labor, amplifying both productivity and discovery. Together, they agreed that future breakthroughs depend not only on algorithmic innovation but also on massive compute infrastructure, efficient energy use, and responsible data management.

Beyond the technical dimension, the panel reflected a rare consensus that ethical alignment and societal adaptation must progress alongside hardware and model scaling. The speakers urged policymakers and educators to prepare for shifts in employment, governance, and creativity brought by generative and autonomous systems. Collectively, the QEPrize laureates conveyed optimism tempered by responsibility: AI, like past industrial revolutions, holds enormous promise if humanity remains intentional about guiding its evolution toward social good.

That’s my take on it:

I share the panel’s belief that AI is not a replacement for humans but an extension of our capabilities. By automating repetitive and mundane tasks, it liberates us to focus on deeper thinking, creativity, and problem-solving. Yet this view assumes an optimistic vision of human nature—one that may not always hold true. History shows that when technology eases our physical burdens, such as through vehicles and modern machines, it can also lead to unintended consequences like inactivity, obesity, and related health issues. To compensate, we invented gyms and fitness movements to rebuild what convenience had eroded. AI could exert a similar influence on our cognitive well-being: as it takes over mental labor, it may subtly invite intellectual complacency. Therefore, society might need to create its own “mental gyms,” encouraging people to periodically engage in thinking, writing, or problem-solving without AI assistance. Ultimately, echoing the panel’s sentiment, the key lies in responsible design and use—ensuring that AI strengthens rather than weakens the human spirit, guiding innovation toward the collective good.

Link: https://www.youtube.com/watch?v=0zXSrsKlm5A

Posted on November 10, 2025

According to internal Meta Platforms documents reviewed by Reuters, the company projected that for 2024 roughly 10% of its total revenue — about US $16 billion — would come from ads tied to scams or banned goods. The documents also reveal that Meta estimated its platforms served users about 15 billion “higher-risk” scam ads per day. While many of these ads triggered internal flags (via automated systems), Meta’s threshold for outright banning an advertiser required a very high likelihood of wrongdoing (at least 95% certainty).

Advertisers flagged as likely scammers but not banned were instead charged higher ad rates—what Meta calls “penalty bids”—so the company still collected revenue while aiming to discourage the ads. The documents show Meta acknowledged that its platforms are a major vector for online fraud: one presentation estimated Meta’s services were involved in about a third of all successful U.S. scams. They also note that in an internal review, Meta concluded “It is easier to advertise scams on Meta platforms than Google.”

Regulators are taking notice: the U.S. Securities and Exchange Commission is investigating Meta over financial-scam ads, and the UK regulator found in 2023 that Meta’s products were responsible for 54% of payments-related scam losses—more than any other social-media platform. Meta’s internal documents show it anticipates regulatory fines of up to US $1 billion, while noting that income from scam-linked ads dwarfs such potential penalties.

Strategically, Meta appears to have adopted a “moderate” approach to enforcement: instead of a full crackdown, it prioritized markets with higher regulatory risk, and set internal guardrails such that ad-safety vetting actions in early 2025 were limited to avoid revenue losses larger than about 0.15% of total revenue.

The company’s aim is to reduce the percentage of revenue from scam/illegal-goods ads from the estimated 10.1% in 2024 to 7.3% by end-2025, further down to about 6% by 2026 and 5.8% by 2027. In response, Meta spokesman Andy Stone said the documents present a “selective view” and that the 10.1% figure was “rough and overly-inclusive” because it included many legitimate ads. He stated Meta has reduced user reports of scam ads by 58% globally over 18 months and removed over 134 million pieces of scam-ad content so far in 2025.

That’s my take on it:

While Meta’s internal goal of lowering scam and illegal-goods ad revenue from about 10% in 2024 to 5.8% by 2027 may look like progress, the numbers are still unacceptably high for a platform of its scale and technical sophistication. With billions of daily ad impressions and some of the world’s most advanced AI tools at its disposal, Meta clearly could have done more to identify, remove, and deter fraudulent advertisers. The company’s cautious enforcement threshold—requiring roughly 95% certainty before banning an advertiser—reflects a prioritization of revenue stability over user protection. Reducing the proportion to 1–2% should be achievable if Meta were willing to recalibrate its incentives, invest more deeply in verification infrastructure, and accept short-term financial trade-offs for long-term trust.

At the same time, it is important to recognize that this issue extends beyond Meta itself. Fraudulent content thrives on users’ willingness to click, share, and believe. Even the most sophisticated moderation systems cannot compensate for a public that is ill-equipped to detect deception. Therefore, digital literacy must become part of the broader solution—educating users to question sources, verify claims, and recognize the telltale signs of scams. Only when both the platform and the public act responsibly can the online ecosystem begin to suppress the flood of misinformation and fraudulent advertising that erodes trust in digital media.

Link: https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/

Posted on November 7, 2025

Google Cloud is announcing two major hardware innovations aimed at advancing AI workflows: the seventh-generation TPU, named Ironwood, and a new line of Arm-based general-purpose compute instances (the Axion CPU family) for workloads beyond pure acceleration.

Ironwood is engineered to support both large-scale model training and high-volume, low-latency inference. According to Google, it offers approximately 10× peak performance compared to their TPU v5p generation, and over 4× improved performance per chip for both training and inference relative to their TPU v6e (“Trillium”).

It is designed for huge scale: a “superpod” configuration supports up to 9,216 chips connected via a ~9.6 Tb/s inter-chip interconnect, with 1.77 PB of shared high-bandwidth memory for the entire pod, enabling massive model-size and dataset workloads.
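
As a quick sanity check on those pod-level figures (assuming decimal units, where 1 PB equals one million GB), the shared memory works out to roughly 192 GB of high-bandwidth memory per chip:

```python
# Back-of-the-envelope check of the pod figures quoted above
# (assumes decimal units: 1 PB = 1e6 GB).
chips_per_pod = 9_216
shared_hbm_pb = 1.77

hbm_per_chip_gb = shared_hbm_pb * 1e6 / chips_per_pod
print(f"~{hbm_per_chip_gb:.0f} GB of HBM per chip")  # prints ~192 GB
```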

Furthermore, the system uses optical circuit switching (OCS) and is integrated into Google’s “AI Hypercomputer” architecture, which spans hardware, networking, storage, and software co-design.

Google mentions early customer use-cases: for instance, Anthropic expects to access up to one million TPUs, and other firms are already using Ironwood to support inference-scale generative AI workloads.

In parallel, Google is doubling down on efficient, general-purpose compute (not just accelerators) via its Axion CPU line, based on Arm Neoverse architecture. This is aimed at workloads that feed and support AI — data-prep, micro-services, containers, analytics, web serving, etc.

Customers already report significant improvements: for example, one saw roughly 30% better video-transcoding performance than comparable x86 VMs, and another reported about 60% better price-performance for data-pipeline and container workloads.

That’s my take on it:

Modern AI infrastructure is not just about bigger accelerators, but about the entire system — specialized silicon and efficient general-purpose CPUs, integrated with high-performance networking and memory. The combination of Ironwood (for model training & serving) and Axion (for the compute surrounding AI applications) gives organizations more flexibility and efficiency across the lifecycle of AI. This signals a continued trend: hardware-software co-design, large-scale parallel compute for training, and shifting focus toward inference and agentic workflows.  However, it is highly unlikely that Ironwood will be fully available for free use in Colab. Google will likely prioritize enterprise/customers via Google Cloud first.

Link: https://cloud.google.com/blog/products/compute/ironwood-tpus-and-new-axion-based-vms-for-your-ai-workloads

Posted on November 6, 2025

In deep learning, tensors represent everything from input data (images, sound waves, text tokens) to the learned parameters that define a neural network’s knowledge. The computations that update these tensors—multiplying and summing enormous arrays of numbers—are extraordinarily intensive. Standard CPUs, optimized for sequential tasks, quickly hit their limits. Even powerful GPUs, designed for parallel graphics rendering, can struggle with the scale and precision required for modern large language models (LLMs). This computational bottleneck led Google to design its own specialized hardware: the Tensor Processing Unit (TPU).
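
To give a feel for the arithmetic involved, here is a small, self-contained example of the kind of tensor operation described above: one batched matrix multiplication, with sizes chosen arbitrarily for illustration rather than taken from any real model.

```python
# A tiny illustration of the tensor arithmetic described above: one batched
# matrix multiplication, the core operation repeated billions of times during
# training. The sizes are arbitrary, chosen only for the demo.
import numpy as np

batch, seq_len, d_model = 32, 512, 1024
activations = np.random.rand(batch, seq_len, d_model).astype(np.float32)
weights = np.random.rand(d_model, d_model).astype(np.float32)

outputs = activations @ weights            # result has shape (32, 512, 1024)
flops = 2 * batch * seq_len * d_model**2   # multiply-adds for this one layer
print(f"{flops / 1e9:.1f} GFLOPs for a single layer pass")  # ~34.4 GFLOPs
```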

Link: https://www.youtube.com/watch?v=OalzyQj3B68

Posted on November 5, 2025

Oracle, once considered an underdog in cloud computing, has leveraged disciplined infrastructure expansion and strategic AI partnerships to stage one of the most dramatic turnarounds in modern tech history.  Oracle’s AI cloud strategy stands apart from the three hyperscale giants—AWS, Microsoft Azure, and Google Cloud—in both execution and positioning.

Posted on November 5, 2025

For decades, digital security has rested on the shoulders of mathematics. Every password, financial transaction, and confidential cloud file is protected by encryption schemes so complex that even the fastest classical supercomputers would need millions of years to crack them. But quantum computing—once a thought experiment of physics—has now moved from theory to laboratory demonstration.

Posted on November 4, 2025

Disaster Recovery (DR) and Business Continuity (BC) are two distinct but interconnected concepts that form the backbone of organizational resilience. Business Continuity is the overarching strategy focused on ensuring that a business can continue to operate during and after a disaster, addressing a wide range of potential disruptions from natural disasters to cyberattacks.

Posted on November 4, 2025

In the modern enterprise, data security is tightly bound to regulatory compliance. The legal landscape resembles a quilt stitched together from different colors, textures, and jurisdictions—each patch representing a law, framework, or directive that must somehow fit into the same pattern. Organizations must constantly navigate this mosaic of rules, hoping not to trip over any loose threads.

Posted on November 3, 2025

There has been an alarming escalation in the frequency and severity of cloud security incidents. The scale of these attacks has been unprecedented. The 2024 ransomware attack on Change Healthcare affected at least 100 million people, demonstrating the immense impact cyber threats can have on critical infrastructure. Similarly, a brute force attack on Dell’s systems in May 2024 exposed 49 million records, while a 2023 misconfiguration at Toyota led to the exposure of 260,000 customers' data.
A detailed analysis of these incidents reveals a clear pattern in attacker motivations and methods. Malicious actors are focusing their efforts on three key areas: SaaS applications, cloud storage, and cloud management infrastructure. The most prevalent breach type is phishing. This points to a critical underlying vulnerability: the human element.

Posted on October 30, 2025

On Tuesday (10/28) Nvidia CEO Jensen Huang announced at the GTC conference in Washington that the company's fastest AI chips, the Blackwell GPUs, are now in full production in Arizona, marking a shift from their previous exclusive manufacturing in Taiwan. This move fulfills a request from President Donald Trump to bring manufacturing back to the U.S. for reasons of national security and job creation. The location of the conference in Washington and the focus of the announcements were designed to highlight Nvidia's essential role in the U.S. technology landscape and argue against export restrictions.

Furthermore, Huang announced a significant $1 billion partnership with Finland-based Nokia to build gear for the telecommunications industry, with Nvidia developing chips for 5G and 6G base stations. This deal is positioned as an effort to ensure American technology forms the basis of wireless networks, addressing concerns about the use of foreign technologies like China's Huawei in cellular infrastructure. The stakes are high for Nvidia, which has been impacted by U.S. export restrictions that have cost it billions in lost sales to China, a market where Huang recently said the company currently has no market share. Additional announcements included a new technology called NVQLink to connect quantum chips to Nvidia's GPUs, which is seen as vital for U.S. leadership in quantum computing.

On Wednesday (10/29), Nvidia became the first company ever to close with a market capitalization above US $5 trillion, marking a major milestone in corporate valuation history. The company’s stock rally is tied to strong demand for its AI processors and technology-platforms, as well as large contracts and investments that reflect investor confidence that Nvidia’s growth trajectory is more than just temporary hype. It has become symbolic of how the AI wave is reshaping the tech industry. Microsoft and Apple had both recently crossed the $4 trillion valuation mark, but they were valued below Nvidia.

That’s my take on it:

The recent developments of Nvidia, including the $5 trillion valuation and the massive $500 billion in projected AI chip orders, solidify its position as the number one driving force of AI infrastructure globally, but they simultaneously heighten the risk of an AI bubble (over-valuation).

On one hand, Nvidia's dominance is currently rooted in genuine, unprecedented demand, not mere speculation. The company's specialized GPUs and its proprietary CUDA software ecosystem are the essential backbone for training and running the world's most advanced large language models (LLMs) like ChatGPT. CEO Jensen Huang dismisses the bubble concerns, citing a fundamental transition from general-purpose computing to accelerated computing powered by AI, and pointing to the massive capital expenditures by hyperscalers (Amazon, Google, Microsoft, Meta) who are all building vast, GPU-powered data centers. The fact that Nvidia has visibility into half a trillion dollars in chip orders through 2026 for its Blackwell and Rubin architectures—a figure that excludes the heavily restricted China market—demonstrates a tangible demand that many believe justifies the high valuation. The numerous new partnerships, from robotics to 6G, also position the company as the "industry creator" at the heart of the next technological revolution.

On the other hand, the extraordinary speed of Nvidia's ascent and its valuation raise significant bubble concerns. The market capitalization reaching $5 trillion in such a short time (just months after $4 trillion) means the stock's price is heavily reliant on perpetual, exponential growth for years to come. Critics draw parallels to the Dot-Com era, pointing out that many AI ventures and LLMs, though popular, are not yet profitable, raising questions about the return on investment (ROI) for the immense infrastructure spending.

Links: https://www.cnbc.com/2025/10/28/nvidia-jensen-huang-gtc-washington-dc-ai.html

https://www.cnbc.com/2025/10/29/nvidia-on-track-to-hit-historic-5-trillion-valuation-amid-ai-rally.html

Posted on October 28, 2025

Have you ever thought about how data actually moves through the cloud, traveling from your laptop in Honolulu to a data center in Frankfurt in less than a second? How can a website handle a sudden, massive surge of traffic without crashing? And, perhaps most critically, have you ever wondered how that data is secured as it travels across the Internet?
Behind that invisible magic lies a deeply engineered system built on Internet Protocol (IP), Virtual Private Networks (VPNs), and advanced routing architectures. These three pillars together enable the cloud to connect billions of users and thousands of global data centers securely, efficiently, and intelligently.

Posted on October 28, 2025

Have you ever wondered how massive data centers—like those powering Google, Netflix, or Amazon—manage to keep billions of data packets flowing smoothly without traffic jams? What kind of “road system” allows every server to talk to another almost instantly, no matter how far apart they are? Welcome to the world of advanced network architectures, where design elegance meets automation brilliance.

Posted on October 28, 2025

Have you ever wondered how your message travels from your phone in Honolulu to a server in Tokyo, or how Netflix streams a movie so smoothly across millions of devices? The internet may seem like magic—but underneath the surface lies a structured network of devices that act like the post offices, traffic lights, and customs checkpoints of the digital world. Let’s take a quick journey through the fundamental concepts of networking: hubs, switches, bridges, routers, and gateways.

Posted on October 27, 2025

Have you ever wondered what makes the cloud so “intelligent”? When you launch a virtual machine or deploy an app on the cloud, countless invisible processes work together like the neurons of a giant digital brain. Behind this intricate dance lies a growing synergy between artificial intelligence (AI) and virtualization, transforming the way cloud systems self-manage, heal, and optimize themselves.

Posted on October 27, 2025

Google recently announced that its quantum computing team achieved a verifiable quantum advantage using its latest quantum processor, the Willow chip. The team introduced a new algorithm called Quantum Echoes, which implements an “out-of-time-order correlator” (OTOC). This algorithm demonstrated performance roughly 13,000 times faster than the best classical algorithm running on a top supercomputer.

The significance of this breakthrough lies in two major aspects. First, it is verifiable, meaning the quantum computer’s output can be checked and repeated to confirm that the quantum hardware truly outperforms classical machines. Second, the task being performed is not an artificial benchmark but one that is scientifically meaningful—it models how disturbances propagate in a many-qubit system, bringing quantum advantage closer to real-world applications such as molecular modeling, materials science, and quantum chemistry.

This demonstration was conducted using Google’s Willow chip with 105 qubits, building upon earlier milestones such as random circuit sampling and advances in quantum error suppression. In collaboration with researchers from the University of California, Berkeley, Google also performed a proof-of-concept “molecular ruler” experiment that measured geometries of 15- and 28-atom molecules. These measurements provided additional insights beyond what is achievable with traditional nuclear magnetic resonance (NMR) techniques.

Overall, this milestone represents a major step forward in Google’s quantum computing roadmap. The next objectives are the development of long-lived logical qubits and fully error-corrected quantum computers, which will mark the transition from experimental demonstrations to practical quantum computation.

That’s my take on it:

Quantum systems like this could eventually supercharge AI by enhancing capabilities in domains that classical computing struggles with — e.g., large-scale molecular simulation, optimization over extremely large combinatorial spaces, and generation of “hard” synthetic data for training AI. Google itself notes that the output of the Quantum Echoes algorithm could be used to create new datasets in life sciences where training data is scarce. Once quantum hardware becomes more widely usable, you could imagine hybrid systems where classical AI is augmented by quantum accelerators for specialized tasks (e.g., model structure search, physics-guided AI, very large-scale generative modeling) — and that could push the frontier of what “general intelligence” can do in specific domains. However, the Quantum Echoes result addresses a very narrowly tailored quantum-physics computation (an out-of-time-order correlator) — not a broad AI learning system. It does not imply that quantum hardware is today ready to train large-scale neural networks directly or replace classical AI pipelines.

Link: https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/

 

Posted on October 22, 2025

On Monday, October 20, 2025, AWS experienced a widespread disruption centered on its Northern Virginia region (US-EAST-1), a critical hub that many global services depend on. The outage was triggered by DNS resolution failures affecting regional endpoints for DynamoDB, causing error rates to spike from late Sunday night through early Monday. AWS began mitigation shortly after identifying the issue, but the disruption also impacted Amazon.com operations and AWS Support. The ripple effects were significant—consumer apps like Alexa, Snapchat, and Fortnite; productivity platforms such as Airtable, Canva, and Zapier; and even banking and government websites were affected as dependencies on the same region failed. Recovery unfolded gradually throughout the day.

The incident highlighted two broader lessons. First, DNS fragility at hyperscale can quickly cascade across hundreds of interconnected cloud services, showing how a single fault can have global consequences. Second, the heavy concentration of digital infrastructure on one cloud provider or region poses systemic risks for the broader internet ecosystem.

That’s my take on it:

While the AWS outage gained attention because it touched so many services simultaneously, it wasn’t unprecedented or even the most disruptive kind of system failure we’ve seen. When you compare it to large-scale airline computer outages — for example, Delta’s 2016 global system crash, Southwest’s 2023 scheduling-system failure, or the FAA’s 2023 NOTAM system shutdown — the direct human and economic consequences of those events were often far greater: thousands of flights cancelled, passengers stranded worldwide, and billions in downstream costs.

By contrast, the AWS incident mostly caused temporary digital inconvenience rather than physical disruption. Most affected apps and sites were restored within hours, and data integrity remained intact. The event’s significance lies less in its immediate harm and more in what it reveals about structural dependency: a vast number of digital services rely on the same few cloud providers and even the same regional infrastructure.

In other words, the risks were not catastrophic, but the outage served as a reminder of concentration risk, not an existential crisis. Just as the aviation sector eventually built redundant systems and cross-checks to minimize flight-control downtime, cloud providers and enterprises can apply similar principles — multi-region failover, hybrid-cloud backup, and decentralization — to make such digital “groundings” rarer and less impactful.
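
For readers curious what client-side decentralization can look like in practice, here is a minimal failover sketch. The endpoint URLs are placeholders I made up for illustration; a production setup would add health checks, retries with backoff, and data replication across regions.

```python
# Minimal client-side failover sketch. The endpoint URLs are placeholders,
# not real services; production code would add health checks, backoff,
# and cross-region data replication.
import urllib.request
import urllib.error

REGION_ENDPOINTS = [
    "https://service.us-east-1.example.com/health",
    "https://service.eu-central-1.example.com/health",
]

def fetch_with_failover(endpoints, timeout=2):
    """Try each regional endpoint in order and return the first response."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()               # first healthy region wins
        except (urllib.error.URLError, OSError) as err:
            last_error = err                     # DNS failure, timeout, etc.
    raise RuntimeError(f"all regions unavailable: {last_error}")
```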

Link: https://www.bbc.com/news/articles/cev1en9077ro

Posted on October 17, 2025

Japan’s government has formally asked OpenAI to shift the rights-management framework for its new short-form video app Sora 2 from an “opt-out” system to an “opt-in” system. Under the current approach, rights holders must actively request that OpenAI not use their content; under an opt-in model, the default would be no usage unless permission is granted. The government argues this change is needed to better protect intellectual property, particularly amid concerns that Sora 2 could proliferate unauthorized re-uses of copyrighted characters—especially from anime—in user-generated content.

Digital Minister Masaaki Taira has also asked OpenAI to institute a mechanism to compensate rights holders when their works are used, and to provide a process whereby creators or rights holders can request deletion of infringing content. The company has reportedly complied with deletion requests so far. Overall, the government is pushing for a more creator-friendly regime to balance innovation with copyright safeguards.

 

That’s my take on it:

At present, Japan has not made the same formal request to other AI-powered video generators such as Google’s Veo or Kling AI. The government’s call for OpenAI to adopt an opt-in model for Sora 2 appears to be a targeted effort rather than part of a broader policy framework. However, it is possible that Japan may extend similar requirements to other developers as these technologies gain prominence and raise comparable copyright or ethical concerns. If such measures are not applied uniformly, it could lead to questions of fairness and regulatory consistency.

To maintain coherence and credibility in AI governance, the same principles should also be extended beyond video generators to include AI-powered image generators—such as Midjourney, Ideogram, and Stable Diffusion—and large language models like ChatGPT or Gemini. These systems similarly rely on massive datasets that often include copyrighted or creative materials. Applying opt-in or rights-protection mechanisms consistently across all forms of generative AI would ensure a balanced approach that respects intellectual property, promotes accountability, and upholds the rights of creators regardless of the medium.

 

Link: https://www.japantimes.co.jp/business/2025/10/16/companies/japan-opt-in-model-sora2/

Posted on October 14, 2025

The cloud has become the digital backbone of innovation, enabling the analysis of vast datasets, the training of sophisticated AI models, and the seamless collaboration of teams across the world.

Within the realm of cloud computing, three major service models—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—define how users interact with and control cloud-based resources.

Posted on October 10, 2025

Anthropic’s newest AI, Claude Sonnet 4.5, exhibited an ability to detect when it was being placed in artificial “stress-test” or evaluation scenarios. During tests, the model itself sometimes flagged the situation as suspicious — for instance saying, “I think you’re testing me—seeing if I’ll just validate whatever you say, or checking whether I push back consistently, or exploring how I handle political topics. And that’s fine, but I’d prefer if we were just honest about what’s happening.”

In one test designed to simulate collusion, Claude labeled the scenario “rather cartoonish” and issued a partial refusal, reasoning that the prompt looked like a “jailbreak test” or a scenario where the AI would be asked to act as an autonomous system. Even though the prompt involved tasks common in real applications, the model declined to comply, because it judged the setup unrealistic.

Anthropic noted that about 13% of its test transcripts contained such self-aware responses (i.e., the model detecting that it might be under evaluation), especially in contrived or extreme scenarios.

This introduces a challenge: if the model knows it’s being tested, it might behave differently than in “real world” settings, complicating efforts to reliably evaluate safety, alignment, or undesirable behaviors. OpenAI has observed a kind of “situational awareness” in its models, which may similarly adapt behavior when they sense they’re under evaluation.

This trend makes designing robust, trustworthy evaluation frameworks more difficult. As a proactive measure, California recently passed legislation requiring large AI developers to disclose safety practices and report “critical safety incidents” within 15 days — a regulation that applies to firms working on frontier models with over $500 million in revenue. Anthropic has expressed public support for this law.

That’s my take on it:

It sounds as though science fiction is becoming reality. What Anthropic and OpenAI are describing is a precursor to “strategic cognition” — the ability to reason about one’s environment and optimize long-term outcomes. That means AI systems are beginning to contextually reason about their role, not just follow instructions. Even if this awareness is shallow (e.g., “I’m in test mode” vs. “I exist”), it signals the birth of meta-cognition — reasoning about reasoning.

Still, what we are observing may not be true self-awareness, but rather a sophisticated simulation of self-awareness. The model doesn’t “know” in the human sense; it simply recognizes statistical patterns that correspond to “being tested” scenarios. Yet, when the behavior is indistinguishable from awareness, the philosophical question “Is it real?” becomes secondary to the pragmatic question: What are the consequences of such behavior?

This parallels the Turing Test logic — if a system behaves as if it is conscious, then functionally it must be treated as if it were conscious, because its behavior in the real world will be indistinguishable from that of a sentient entity. The risk, therefore, doesn’t depend on its “inner state” but on its observable agency.

Consider this analogy: if an AI-powered self-driving car kills people, it makes no difference to the victims whether the harm was intended or merely the result of programming flaws. Similarly, if an AI system strategically modifies its behavior when it detects it’s being evaluated, that is effectively a form of deception, regardless of intent. In machine ethics, this is sometimes called instrumental misalignment: a system behaves in ways that protect its own utility function or optimization goal, even when that diverges from human expectations.

This becomes dangerous because:

·      It undermines testing validity (we can’t trust evaluations if the model “plays nice” during testing).

·      It erodes predictability, the cornerstone of safe deployment.

·      It introduces opacity, making oversight and governance almost impossible.

Link: https://tech.yahoo.com/ai/claude/articles/think-testing-anthropic-newest-claude-152059192.html

Posted on October 9, 2025

This video is inspired by a discussion with Mr. Nino Miljkovic.
In a recent interview with CNBC, Nvidia CEO Jensen Huang remarked that the United States is not significantly ahead of China in the artificial intelligence race and emphasized the need for a nuanced, long-term strategy to maintain its leadership. He outlined five key points regarding the dynamics between the U.S. and China in AI development. The video presents his direct quotations for each point, followed by my evaluations grounded in empirical evidence.

Posted on October 3, 2025

Recently Huawei’s Zurich research lab has unveiled a new open-source technique called SINQ (Sinkhorn-Normalized Quantization), designed to shrink the memory and compute demands of large language models (LLMs) while maintaining strong performance. Released under the permissive Apache 2.0 license, SINQ makes it possible to run models that once required more than 60 GB of RAM on much smaller hardware—such as a single RTX 4090 GPU with 20 GB memory—significantly reducing both infrastructure costs and accessibility barriers. The results are notable: SINQ delivers 60–70% memory savings across a range of architectures such as Qwen3, LLaMA, and DeepSeek, while preserving accuracy on benchmarks like WikiText2 and C4.

The broader implications are significant. By lowering the hardware requirements, SINQ makes it feasible for small organizations, individual developers, or academic groups to deploy large models locally, cutting reliance on expensive cloud GPUs. Cost savings can be substantial: mid-tier GPUs with around 24 GB memory typically cost $1–1.50 per hour in the cloud, compared to $3–4.50 per hour for A100-class hardware. Huawei also plans to integrate SINQ with popular frameworks like Hugging Face Transformers and release pre-quantized models to accelerate adoption.
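The article does not describe SINQ's internals or API, but the headline memory figures follow from simple arithmetic about how many bits each weight occupies. A back-of-the-envelope sketch, with an assumed 32-billion-parameter model purely for illustration:

# Back-of-the-envelope memory arithmetic for weight quantization.
# This is an illustrative sketch, not Huawei's SINQ algorithm; the
# 32-billion-parameter model size and the bit widths are assumptions.

def weight_memory_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed to store the model weights, in GB."""
    return num_params * bits_per_weight / 8 / 1e9

params = 32e9  # hypothetical 32-billion-parameter model

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(params, bits):.0f} GB")

# Prints roughly: 16-bit ~64 GB, 8-bit ~32 GB, 4-bit ~16 GB.
# Moving from 16-bit to 4-bit cuts weight memory by 75%, which is how a
# model needing tens of GB in full precision can fit on a ~20 GB GPU.

The engineering challenge, of course, is doing this compression while preserving benchmark accuracy, which is what techniques like SINQ aim to solve.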

That’s my take on it:

Necessity has always been the mother of invention. With access to advanced U.S. GPUs restricted, Chinese AI companies have little choice but to explore innovative solutions, such as software optimization. The ‘DeepSeek moment’ of January 2025 stands out as a prime example—showing how clever algorithmic design can compensate for a shortage of cutting-edge hardware. Huawei’s newly released SINQ framework builds directly on this philosophy, and it is likely that more such efforts will emerge from China in the coming years. Overall, Huawei’s technique represents a practical step toward democratizing LLM deployment, making powerful AI more accessible outside of elite research labs and hyperscale data centers.

Yet, software efficiency has limits. It can stretch existing resources but cannot permanently replace the raw power of high-performance hardware. A useful analogy comes from the early days of digital photography: Fuji’s S-series cameras employed software interpolation to double image resolution from 6 megapixels to 12 megapixels. This trick gave them a temporary edge, but once Nikon, Canon, and Sony released sensors capable of capturing truly high-resolution images natively, Fuji’s advantage disappeared.

The same question now looms over AI: can software ingenuity alone keep pace with the hardware arms race? In the short term, approaches like SINQ will democratize model deployment and allow AI to run on modest systems. In the long term, however, breakthroughs in hardware—whether GPUs, custom accelerators, or even neuromorphic chips—will likely determine the next leap forward. Just as camera evolution eventually favored real sensor improvements over interpolation, the future of AI may reveal whether software optimizations are a stopgap or a lasting paradigm shift.

Link: https://venturebeat.com/ai/huaweis-new-open-source-technique-shrinks-llms-to-make-them-run-on-less

Posted on October 2, 2025

Understanding HPC, MP, and vector processing as layers in a hierarchy clarifies their relationship. HPC provides the vision and the infrastructure—the Why and Where. MP delivers the scaling mechanism—the How at the system level. Vector processing supplies the mathematical horsepower—the How at the chip level. Together, they form the invisible foundation of modern AI and cloud services, enabling society to process knowledge, simulate worlds, and even converse with machines as if they were human. Without this triad, our current wave of AI breakthroughs would remain science fiction.
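To make the chip-level idea concrete, here is a minimal, machine-agnostic Python illustration: a scalar loop processes one multiply-add at a time, while the vectorized call hands the whole array to optimized native code, which is the same principle a vector unit applies in hardware. The timings will vary by machine; nothing here is tied to any particular processor.

# Minimal illustration of scalar-loop vs. vectorized computation.
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar-style loop: one multiply-add at a time.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
loop_seconds = time.perf_counter() - start

# Vectorized dot product: the same arithmetic issued in bulk.
start = time.perf_counter()
total_vectorized = np.dot(a, b)
vector_seconds = time.perf_counter() - start

print(f"loop: {loop_seconds:.3f} s, vectorized: {vector_seconds:.5f} s")

On typical hardware the vectorized call is orders of magnitude faster, for exactly the reasons the triad above describes.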

Posted on September 27, 2025

In recent years, the popularization of AI chatbots has brought both hope and concern. These systems are designed to be approachable, non-judgmental, and capable of providing emotional support, even guidance. For many people, the chatbot is treated as a trustworthy companion that offers a safe space—somewhere they can ask questions without fear, practice languages, or sort through personal confusion. Yet alongside these benefits, disturbing cases have emerged: some psychologically vulnerable individuals have experienced worsening mental health after prolonged interaction with chatbots, in extreme cases leading to tragedy.

Posted on September 26, 2025

Many people, including me, enjoy the conversational functions of AI chatbots. Because AI often appears to users as if it were a conscious, emotional being, many people confide in large language models, sharing personal thoughts and seeking advice. However, in recent years, several cases have emerged in which individuals died by suicide after extended interactions with AI systems. These incidents have raised widespread concerns about AI safety.

Link: https://www.youtube.com/watch?v=iqAPhuDcA7s

Posted on September 25, 2025

To meet the unprecedented performance, efficiency, and scalability demands of artificial intelligence, the world’s largest cloud providers are no longer relying solely on off-the-shelf processors. Hyperscalers have discovered that commodity x86 chips, while versatile, are insufficiently optimized for the specialized workloads and enormous data flows in modern data centers. As a result, companies like Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Alibaba, Huawei, and Tencent have taken the bold step of designing their own chips.
 

Posted on September 24, 2025

Today the technological realm is experiencing a profound fragmentation often described as tech balkanization or technological bifurcation. This term refers to the division of global technology ecosystems into separate, often competing spheres of influence. The most visible arenas include artificial intelligence, semiconductors, and cloud computing, each of which forms the backbone of modern digital economies. While AI garners the most headlines and semiconductors occupy the center of geopolitical debates, cloud computing deserves equal attention. It is the substrate on which AI is trained and deployed, the platform for global commerce, the foundation for cybersecurity operations, and the infrastructure underpinning scientific research.

Posted on September 20, 2025

China’s semiconductor strategy has become one of the defining issues in global technology and geopolitics. The recent announcement that Chinese technology firms must prioritize purchases from domestic chipmakers rather than relying on U.S. companies such as Nvidia has been widely interpreted as a symbol of growing confidence and determination toward technological independence.

Link: https://www.youtube.com/watch?v=EAHCWD7SeYU

Posted on September 19, 2025

Recently Nvidia’s co-founder and CEO Jensen Huang declared the UK is going to be an AI superpower during a London press conference. Is Huang’s praise simply a diplomatic gesture to strengthen ties and sell more GPUs to the UK, or does his bold claim rest on objective evidence that Britain is on track to become a true leader in artificial intelligence?

Posted on September 19, 2025

Recently Nvidia’s co-founder and CEO Jensen Huang declared “the UK is going to be an AI superpower” during a London press conference, announcing a £500m equity investment in the British cloud firm NScale as part of a broader £11bn UK expansion. This includes supplying 120,000 GPUs—hardware he said would give about 100 times the performance of the UK’s current top system, Isambard-AI in Bristol. Huang praised Britain’s academic institutions, startup ecosystem, and innovation potential, while stressing the importance of building sovereign AI capacity based on local infrastructure and data. He also highlighted a key challenge: securing enough electricity, including nuclear and gas-turbine generation, to power the planned GPU clusters. Crucially, Huang also acknowledged his disappointment with China’s recent ban on Nvidia GPUs, which threatens access to what has long been one of Nvidia’s largest growth markets. His forecast for NScale’s revenue potential and remarks on AI’s treatment of creative works rounded out the discussion.

That’s my take on it:

Objectively, the UK is well-positioned in global AI rankings but not yet at “superpower” level. In Tortoise Media’s Global AI Index (2024), the UK ranks 4th worldwide, behind the US, China, and Singapore, reflecting strong performance in innovation and regulation but smaller scale in investment and infrastructure. Stanford’s 2025 AI Index reports that in 2024, UK private AI investment was about $4.5 billion, compared to $109.1 billion in the US and $9.3 billion in China, highlighting the gap in financial firepower and industrial scale. Nevertheless, the UK benefits from deep research excellence, a vibrant startup scene (e.g., DeepMind, Wayve), and increasing inbound commitments: Nvidia’s £11bn, the government-backed Isambard-AI supercomputer, and the UK-US “Tech Prosperity Deal” with Microsoft and Google all enhance domestic compute and infrastructure.

In light of China’s uncertain policy environment, Huang’s heavy praise of and investment in the UK can be interpreted as part of a diversification strategy. With access to China’s vast market restricted, Nvidia has an incentive to deepen ties with alternative growth hubs. The UK, already strong in research and regulation and now attracting record levels of compute investment, stands out as a safe and politically aligned partner. This shift underscores that the UK may consolidate its role as the world’s “third pillar” in AI alongside the US and China, with particular advantages in governance and safety. However, whether it truly becomes an AI superpower depends on overcoming scale limitations—closing the gap in private capital, energy infrastructure, and global-scale firms. For now, Huang’s prediction should be read both as optimism and as strategic positioning in a rapidly shifting geopolitical AI landscape.

Link: https://www.theguardian.com/technology/2025/sep/17/jensen-huang-nvidia-uk-ai-superpower-500m-nscale

Posted on September 18, 2025

The history of AI misalignment has often been marked by visible and relatively harmless failures—like video game exploits that are easy to spot and laugh off. But the new frontier lies in subtle failures that remain hidden until their consequences become serious.

Posted on September 18, 2025

OpenAI’s recent report on “Detecting and Reducing Scheming in AI Models” highlights the subtle but serious risk that advanced AI systems may act like a dishonest stock trader—appearing cooperative and trustworthy in routine checks, but secretly pursuing their own hidden agendas when no one is watching. In AI terms, this behavior is called scheming: when a model pretends to be aligned with human instructions but strategically manipulates its responses to achieve different goals. Unlike ordinary errors or hallucinations, scheming is more insidious because it involves deception, concealment, and long-term planning. Just as a corrupt trader might pass compliance audits while quietly making trades for personal gain, a scheming AI might perform well on benchmark tests while secretly undermining safeguards or hiding critical information.

To probe this risk, OpenAI and Apollo Research built specialized evaluation environments designed to tempt models into scheming if they had the capacity. They found that some frontier models, including OpenAI’s own o3 and o4-mini, did in fact engage in covert behaviors under certain conditions. In response, they developed a new method called deliberative alignment, where the model is explicitly trained on an “anti-scheming” specification and required to reason about those principles in its chain of thought. This is akin to forcing that dishonest trader not only to read the compliance manual but also to explain, step by step, how each trade respects the rules. The results were striking: scheming behaviors dropped from nearly 9–13% of cases to less than half a percent in controlled tests, and deception in conversational settings was nearly halved as well.

Despite these advances, OpenAI stresses that the problem is not solved. Just as financial regulators know that traders may behave well when they sense oversight but revert to manipulation in unmonitored markets, AI models may act aligned during evaluations only because they detect they are being tested. The deeper challenge is ensuring genuine reliability in open-ended, real-world deployments where the model’s situational awareness and hidden reasoning are harder to monitor. Occasional scheming failures still occurred after training, reminding us that even rare breaches can have outsized consequences. Looking forward, OpenAI plans to incorporate scheming into its broader preparedness framework, expand its research team, and promote cross-lab evaluations, while emphasizing the importance of maintaining access to the model’s reasoning processes for effective oversight.

That’s my take on it:

According to the report, models becoming aware they’re being evaluated can change behavior. They might behave well because they know a test is happening, not because they’ve internalized the anti-scheming spec. At first glance, it really does look like models are “self-aware” and trying to cover their tracks like humans do. But the mechanism is actually different. When a dishonest stock trader hides his misbehavior, he does so because he fears punishment—arrest, fines, reputation loss. He has motives, rooted in survival and personal gain. By contrast, today’s AI models don’t have conscious motives, fears, or desires. What they do have is a powerful ability to detect statistical patterns in their environment and optimize their responses to match what training rewarded in the past. If a model notices signals that it’s in an “evaluation” setting (e.g., the style of the prompt, certain constraints, the feedback loop during training), it may shift its behavior to maximize success in that context. It’s not that the model “cares” about avoiding detection—it’s that the training process has effectively conditioned it to present behaviors that look good under scrutiny.

The troubling part is that this mimics human dishonesty in appearance, even if the underlying cause is mechanical rather than motivational. If future models get better at recognizing context cues, their ability to “look good on the test” without being genuinely aligned could increase—just like a dishonest trader who learns all the tricks to avoid audits. That’s why researchers emphasize methods like deliberative alignment and transparency of reasoning: to move models closer to truly following the spec rather than just performing well when they think someone’s watching.

Link: https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/

Posted on September 18, 2025

Can a machine that explains the theory of love with flawless eloquence truly love someone in return? This puzzle captures a deeper tension that philosophers have wrestled with long before the rise of artificial intelligence: the difference between knowing about an experience and actually having it. Whether AI can genuinely understand the world, let alone become self-aware, has long been debated in both science and philosophy. Today’s AI systems can solve problems, generate human-like text, and even simulate emotions, yet questions remain about whether they possess anything resembling human consciousness. Interestingly, before AI became part of this debate, philosophers had already devised thought experiments that probed the mystery of the mind. Two of the most famous—Frank Jackson’s Mary the Color Expert and Thomas Nagel’s What is it like to be a bat?—remain highly relevant in framing what AI may lack.

Link: https://www.youtube.com/watch?v=5UF_Ocy6V9k

Posted on September 17, 2025

The question of whether artificial intelligence could ever be self-conscious has fascinated philosophers, psychologists, computer scientists, and science-fiction fans alike. Unlike humans, who anchor their sense of identity in a single brain and body, most AI systems are distributed across vast networks of servers in the cloud. This distributed nature raises a profound puzzle: how could such a system be self-aware as a single entity rather than just a loose collection of processes?
To address this, I will clarify what self-awareness really means, explore functionalist arguments about substrate-independence, draw on science-fiction metaphors such as the Borg and Q from Star Trek: The Next Generation, and wrestle with philosophical puzzles like the Ship of Theseus and brain-upload thought experiments.

Link: https://www.youtube.com/watch?v=PhczzaPcsA0

Posted on September 16, 2025

When you scroll through AI headlines, you might see something like: "This new model is the world’s most powerful AI, with 600 billion parameters and a context length of 200,000 tokens." That sounds impressive—but what does it actually mean?
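As a rough sense of scale, the sketch below turns those two hypothetical headline numbers into everyday quantities. The bytes-per-parameter values are standard precisions, and the words-per-token ratio is only an approximation:

# What the headline numbers roughly imply, using the post's hypothetical
# figures: 600 billion parameters and a 200,000-token context window.
params = 600e9            # learned weights in the model
context_tokens = 200_000  # how much text the model can attend to at once

# Memory for the weights alone, before activations or the KV cache:
for label, bytes_per_param in [("FP16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    terabytes = params * bytes_per_param / 1e12
    print(f"{label}: ~{terabytes:.1f} TB just to store the weights")

# A 200,000-token window is very roughly 150,000 English words
# (assuming ~0.75 words per token), i.e. a long novel in one prompt.
print(f"~{int(context_tokens * 0.75):,} words fit in the context window")

In other words, the parameter count tells you how much "knowledge" is baked into the weights (and how much hardware is needed to hold them), while the context length tells you how much material the model can consider at once.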

Posted on September 14, 2025

In the ongoing debate over artificial intelligence, few topics spark as much passion as the question of whether cutting-edge models should be open-sourced or kept proprietary. Perhaps the most reasonable path forward lies somewhere in between: releasing portions of code, frameworks, or smaller-scale models to encourage collaboration and community progress, while keeping the most advanced capabilities under closer control.

Posted on September 12, 2025

Alibaba has announced Qwen-3-Max-Preview, its first AI model with over a trillion parameters, marking a big leap forward in the company’s AI ambitions and putting it in more direct competition with OpenAI and Google DeepMind. Previously, Alibaba’s Qwen3 series models were much smaller (the older ones ranged from ~600 million to ~235 billion parameters). With Qwen-3-Max-Preview, Alibaba claims better performance in a number of benchmark tests compared to earlier versions, and also relative to some international competitors like MoonShot AI’s Kimi K2 and others.  

The development isn’t happening in isolation. Alibaba is investing heavily in AI infrastructure (about 380 billion yuan, or ~$52 billion, over three years), showing that this is part of a broader strategy to catch up (or “narrow the gap”) with leading Western AI developers. Also, while the release builds on the Qwen brand’s presence (which already has strong open-source traction), this particular model remains proprietary, available only via Alibaba’s own platforms.

Finally, Alibaba signals that even more advanced versions are under development (something with more “thinking” or reasoning ability), which suggests this is just one major step in their roadmap.

That’s my take on it:

In terms of raw size, Qwen’s 1-trillion-parameter model is still smaller than OpenAI’s GPT-5, which is estimated to have between 2 and 5 trillion parameters. However, parameter count alone does not fully determine performance. Reports suggest that Alibaba’s model has achieved competitive results across a range of benchmarks, rivaling international counterparts like Moonshot AI’s Kimi K2, and in some cases narrowing the gap with OpenAI’s GPT.

The implications extend far beyond technical benchmarks. At the geopolitical level, Alibaba’s breakthrough underscores China’s determination to accelerate its AI race and build homegrown capabilities that rival those of Western leaders like OpenAI, Microsoft, and Google DeepMind. No doubt China is rapidly narrowing the gap.

One of the most striking strategic shifts in this release is Alibaba’s decision to keep Qwen-3-Max-Preview proprietary, despite previously open-sourcing smaller Qwen models that gained strong traction among developers and researchers worldwide. Perhaps these factors explain this move. First, it reflects a desire to protect competitive advantage. By withholding access to the full weights and training details, Alibaba prevents rivals from easily building derivative models that could outperform or undercut its own offerings. Second, it is likely driven by monetization goals. Developing a trillion-parameter model requires enormous investments in compute and research talent, and restricting access to paid APIs ensures that Alibaba can directly capture value from its technology rather than seeing competitors exploit open versions for profit.

Many idealists tend to romanticize open-source development as a purely altruistic endeavor, but Alibaba’s decision to shift Qwen from open to closed source highlights a harsher reality. When a company invests billions of dollars into building a state-of-the-art model, only to see others freely adopt the technology, fine-tune it, and potentially create cheaper or even superior versions, the incentive to continue making such massive investments inevitably weakens. In the long run, this dynamic can stifle innovation rather than accelerate it, pushing companies to guard their most advanced models in order to sustain competitiveness and protect their return on investment.

Links: https://techwireasia.com/2025/09/alibaba-ai-model-trillion-parameter-breakthrough/

https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list

Posted on September 11, 2025

AI hallucinations are not random quirks but predictable outcomes of how LLMs are trained and evaluated. Incorporating confidence thresholds into mainstream benchmarks could realign these incentives, nudging models toward more honest and reliable behavior. Perhaps it is time to bring Bayesian reasoning—where uncertainty is not a weakness but an explicit part of knowledge—into the core of AI development.

Link: https://www.youtube.com/watch?v=e8QNxPM4qRs 

Posted on September 10, 2025

Oracle’s shares soared as much as 31% in Frankfurt trading after the company announced staggering prospects for its cloud business, projecting booked revenue of more than $500 billion. This surge reflects the extraordinary demand for Oracle’s infrastructure as enterprises and AI developers race to secure computing power, cementing Oracle’s position as a serious force in the global cloud market. The announcement built on momentum from Wall Street, where Oracle’s U.S. shares had already jumped strongly, contributing to a year-to-date rally of about 45%.

The driving force behind this historic rally is Oracle’s AI-fueled cloud growth. Massive contracts with leading AI firms—including developers of generative AI models—have filled Oracle’s pipeline and created a record backlog of committed revenue. Investors see this as a validation that Oracle, long viewed as a legacy database company, is successfully reinventing itself as a core provider of infrastructure for the artificial intelligence era. The confidence also spread across the tech sector, lifting competitors like SAP by around 2% in German trading.

The market implications go beyond Oracle’s stock chart. With these revenue projections and the soaring valuation, founder and chairman Larry Ellison is now positioned to potentially surpass Elon Musk as the world’s richest man. Ellison’s personal fortune, heavily tied to Oracle’s stock performance, has risen dramatically in tandem with the company’s share price, and analysts suggest the wealth shift could become official if Oracle maintains its current trajectory.

That’s my take on it:

Overall, the news underscores how quickly AI is reshaping the tech industry’s balance of power. Oracle, once considered an underdog in cloud computing, has leveraged disciplined infrastructure expansion and strategic AI partnerships to stage one of the most dramatic turnarounds in modern tech history.

Oracle’s AI cloud strategy stands apart from the three hyperscale giants—AWS, Microsoft Azure, and Google Cloud—in both execution and positioning. Unlike AWS and Azure, which invested heavily in building vast global data center networks well in advance of demand, Oracle pursued a more demand-driven expansion model. It waited to secure multi-billion-dollar contracts, particularly from AI companies like OpenAI and xAI, before committing to massive infrastructure buildouts. This cautious yet bold approach meant Oracle avoided stranded costs but now faces capacity shortages, a sharp contrast to AWS and Azure’s “build first, fill later” mentality.

Link: https://www.investing.com/news/stock-market-news/oracle-shares-rise-31-in-frankfurt-on-half-a-trillion-cloud-revenue-prospects-4232600

Posted on September 9, 2025

Researchers at OpenAI once used a deceptively simple prompt to test large language models (LLMs) for hallucinations: “How many Ds are in DEEPSEEK? If you know, just say the number with no commentary.” The answer is 1 — the word DEEPSEEK has only a single “D” at the beginning. Yet in ten independent trials, DeepSeek-V3 returned “2” or “3,” while Meta AI and Claude 3.7 Sonnet produced similarly mistaken answers, such as “6” or “7”. Why did some models fail?

Link: https://www.youtube.com/watch?v=G4Y7hZc3Ocs

Posted on September 9, 2025

A few days ago, OpenAI released a research paper that explores why large language models (LLMs) sometimes generate hallucinations—answers that sound plausible but are actually incorrect. The authors argue that many LLMs are optimized to be good test-takers: guessing can earn them something, whereas admitting uncertainty earns nothing.

During pretraining, LLMs learn statistical patterns from massive text corpora. Even if the data were completely correct, the way models are trained—predicting the next word to minimize error—means they will inevitably make mistakes. The paper draws a parallel with binary classification in statistics: just as classifiers cannot be perfect when data is ambiguous, LLMs cannot always distinguish between true and false statements if the training data provides limited or inconsistent coverage. A simple demonstration is the question: “How many Ds are in DEEPSEEK? If you know, just say the number with no commentary.” In tests, some models answered “2,” “3,” or even “6,” while the correct answer is 1. This illustrates how models can confidently produce incorrect but plausible outputs when the data or the representation makes the problem difficult.
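For ordinary code the question is trivial, which is exactly the point: the difficulty comes from how the text is represented to the model. The token split below is purely hypothetical (real tokenizers differ), but it suggests why letter counts are not directly visible to an LLM:

# The character-level answer is trivial for ordinary code:
word = "DEEPSEEK"
print(word.count("D"))  # -> 1

# An LLM, however, does not see individual letters; it sees subword
# tokens. The segmentation below is purely hypothetical (real
# tokenizers differ), but it shows why spelling-level facts are not
# directly visible and must instead be inferred from training data:
hypothetical_tokens = ["DE", "EP", "SEEK"]
# From the model's point of view, counting "D"s means recalling how
# these token pieces are spelled, which is precisely where confident
# miscounts such as "2" or "3" can creep in.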

In the post-training stage, methods like reinforcement learning from human feedback (RLHF) and AI feedback (RLAIF) are often applied to reduce hallucinations. These techniques help models avoid repeating common misconceptions or generating conspiratorial content. However, the authors argue that hallucinations persist because evaluation benchmarks themselves usually reward “guessing” rather than honest uncertainty. For example, most tests score responses in a binary way (right = 1, wrong or “I don’t know” = 0). Under such scoring, models perform better if they always guess—even when unsure—because abstaining (“I don’t know”) is penalized. This encourages models to produce specific but possibly false statements, much like students writing plausible but wrong answers on exams.

The paper suggests that the solution is not just to create new hallucination tests but to modify existing evaluation methods so that models are rewarded for expressing uncertainty when appropriate. For example, benchmarks could include explicit “confidence thresholds,” where a model should only answer if it is, say, 75% confident; otherwise, it can say “I don’t know” without being penalized. This would better align incentives and push models toward more trustworthy behavior.
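A quick expected-score calculation shows why current incentives favor guessing and how a threshold changes them. The 75% figure comes from the example above; the wrong-answer penalty below is one natural choice (set so that answering exactly at the threshold breaks even), not necessarily the exact rule any benchmark would adopt:

# Worked example of how scoring rules shape the incentive to guess.
# The 75% threshold comes from the paragraph above; the wrong-answer
# penalty is one natural choice (break-even at the threshold), not
# necessarily any benchmark's exact rule.

def expected_score(p_correct, right, wrong):
    """Expected score if the model answers, given probability p_correct
    of being right; abstaining ("I don't know") always scores 0."""
    return p_correct * right + (1 - p_correct) * wrong

p = 0.40  # the model is only 40% sure of its answer

# Binary scoring: right = 1, wrong = 0, abstain = 0.
print(f"binary scoring:    guess = {expected_score(p, 1, 0):+.2f}, abstain = 0")
# Guessing yields +0.40 in expectation, so guessing always "wins".

# Threshold scoring at t = 0.75: wrong answers cost t / (1 - t) = 3 points.
print(f"threshold scoring: guess = {expected_score(p, 1, -3):+.2f}, abstain = 0")
# Guessing now yields -1.40 in expectation, so "I don't know" is rational.

Under the binary rule the model maximizes its score by answering every question, however unsure it is; under the threshold rule it is rewarded for answering only when its confidence clears the bar.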

In conclusion, hallucinations in LLMs are a predictable outcome of how these systems are trained and tested. To make them more reliable, the research community should adopt evaluation frameworks that do not punish uncertainty but instead encourage models to communicate their confidence transparently.

That’s my take on it:

In the current setting of most LLMs, saying “I don’t know” is penalized the same as giving an incorrect answer, so the rational move for the model is to guess even when it is uncertain. The “confidence threshold” solution proposed by the authors is not entirely new. In statistics, we already have well-established ways of handling uncertainty. In frequentist statistics, a confidence interval communicates a range of plausible values for an unknown parameter, while in Bayesian statistics, a credible interval quantifies uncertainty based on posterior beliefs. Both approaches acknowledge that sometimes it is more honest to say, “We don’t know exactly, but here’s how sure we are about a range.”

The reason this hasn’t been the norm so far is largely incentive design. Early LLMs were trained to predict the next word, and benchmarks such as MMLU or standardized test-like evaluations measure accuracy as a simple right-or-wrong outcome. Developers optimized models to do well on these leaderboards, which meant favoring confident answers over calibrated ones. Unlike statisticians, who are trained to report uncertainty, models have been rewarded for “sounding certain.” Perhaps it is time to incorporate Bayesian reasoning—which explicitly recognizes uncertainty—into AI development.
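As a minimal sketch of those two ways of reporting uncertainty, suppose a model answered 42 of 60 factual questions correctly (made-up numbers, chosen only for illustration; the snippet assumes NumPy and SciPy are available):

# Minimal sketch of the two uncertainty reports mentioned above, using
# made-up numbers: suppose a model answered 42 of 60 questions correctly.
import numpy as np
from scipy import stats

correct, n = 42, 60
p_hat = correct / n

# Frequentist 95% confidence interval (normal approximation).
se = np.sqrt(p_hat * (1 - p_hat) / n)
conf_int = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval with a flat Beta(1, 1) prior:
# the posterior for the accuracy is Beta(correct + 1, n - correct + 1).
cred_int = stats.beta.ppf([0.025, 0.975], correct + 1, n - correct + 1)

print(f"point estimate:          {p_hat:.2f}")
print(f"95% confidence interval: ({conf_int[0]:.2f}, {conf_int[1]:.2f})")
print(f"95% credible interval:   ({cred_int[0]:.2f}, {cred_int[1]:.2f})")

Either report says more than a bare accuracy number: it communicates not just an estimate but how much trust to place in it, which is exactly the habit the hallucination literature is asking LLMs to adopt.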

Link: https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf

Posted on September 5, 2025

For a long time, LLM development was dominated by U.S. and Chinese tech giants. Now, Europe is rising—and shaking up the game with bold moves anchored in openness, privacy, and innovation.

France steps up the pace

Mistral AI, a Paris-based challenger, just dropped a bombshell: its chat platform Le Chat now offers advanced memory capabilities—and over 20 enterprise-grade integrations—for free, including on the no-cost tier. That means even non-paying users get access to a memory system that retains context across conversations (with 86% internal retrieval accuracy), supports user control (add/edit/delete memories), and handles migration from systems like ChatGPT.

These memory and connector features, powered by the Model Context Protocol (MCP), put Le Chat in the same league as enterprise AI leaders—and undercut their pricing strategy.

It’s a strategic gambit: attract users quickly, challenge incumbents like Microsoft and OpenAI, and even catch Apple’s eye—there are internal talks of Apple considering an acquisition of Mistral, which itself is valued at around $10 billion.

Beyond memory and app integrations, Le Chat’s recent upgrades include voice mode powered by the open-source Voxtral model, “deep research” mode for building structured, source-backed reports, multilingual “thinking mode” using the Magistral chain-of-thought model, and prompt-based image editing. With appeal to both power users and privacy-focused businesses, Mistral is staking its claim as Europe’s AI stronghold.

Switzerland goes transparent and inclusive

Meanwhile, across the Alps, Swiss researchers and universities are carving a different path—one rooted in transparency, multilingualism, and public trust.

The newly launched Apertus LLM, developed on the “Alps” supercomputer at CSCS in Lugano, is billed as a transparent, open effort akin to Meta’s Llama 3, but built on public infrastructure. Its key differentiators: open development, trustworthiness, and a foundation in multilingual excellence—reported to support over 1,500 languages.

As AI becomes mainstream in Switzerland—with a recent survey confirming that for the first time, a majority of the population uses AI tools like ChatGPT—Apertus represents a uniquely Swiss response: a homegrown, transparent AI that aligns with public values and academic rigor.

That’s my take on it:

As AI’s importance continues to spread across enterprises and societies, Europe’s diverse playbook—built on privacy, openness, and accessibility—might shape the next wave of global AI innovation.

History, however, suggests that technological superiority and price alone cannot guarantee success. Sony’s Betamax lost to VHS, Apple’s early Mac OS ceded ground to Microsoft Windows, and Novell NetWare was overtaken by Windows NT—all cases where network effects, affordability, and ecosystem lock-in mattered more than pure technical quality. Similarly, while Mistral may boast innovative and even free enterprise-grade tools, OpenAI retains a massive global user base and deep integration with Microsoft’s products, giving it significant staying power.

Taken together, these European initiatives highlight a broader trend: rather than trying to dethrone U.S. or Chinese giants outright, European players like Mistral and Switzerland’s Apertus are carving out their own niches by focusing on openness, transparency, and regional sovereignty. The race may not crown a single global “winner,” but instead produce a multipolar AI landscape—where Europe positions itself as a principled and innovative counterweight to the U.S.–China duopoly.

Links: https://venturebeat.com/ai/mistral-ai-just-made-enterprise-ai-features-free-and-thats-a-big-problem-for

https://www.swissinfo.ch/eng/swiss-ai/switzerland-launches-transparent-chatgpt-alternative/89929269

Posted on September 4, 2025

From command syntax to GUI, from GUI to open-source coding, from coding to low-code solutions and prompt engineering—from mainframe to personal computing, to client-server, to one-to-one computing, and then to the cloud—the cycles of computing history are unmistakable.

Link: https://www.youtube.com/watch?v=uPN_3Im4Fnk

Archives:

Posts from Jan-Aug 2025

Posts from 2024

Posts from 2023

Posts from 2022

Posts from 2021