Newsroom

Hosted by Chong Ho (Alex) Yu, SCASA President-Elect

Posted on November 23, 2024

In a new article posted to the arXiv preprint server, researchers from MIT, Harvard, and Cornell found that large language models (LLMs) like GPT-4 and Anthropic's Claude 3 Opus struggle to model the real world accurately, especially in dynamic environments. This fragility is most apparent when LLMs are used for navigation: unexpected changes, such as detours or closed streets, can cause significant drops in accuracy or total failure.

LLMs trained on random data formed more accurate world models compared to those trained on strategic processes. This is possibly because random data exposes the models to a wider variety of possible steps, even if they are not optimal. The study raises concerns about deploying AI systems in real-world applications, such as driverless cars, where dynamic environments are common. The researchers warn that the lack of coherent world models in LLMs could lead to malfunctions.
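To make the navigation stress test concrete, here is a minimal sketch in Python of the kind of perturbation the researchers describe, using the networkx library and a small street-grid stand-in (my illustration, not the paper's code): plan a route, close one street on it, and check whether the original plan survives.

```python
import networkx as nx

# A 5x5 grid as a toy street map: plan a route, then introduce an
# unexpected "street closure" on that route. A navigator with a coherent
# world model should replan; a brittle one keeps proposing the
# now-impossible route.
G = nx.grid_2d_graph(5, 5)
start, goal = (0, 0), (4, 4)

route = nx.shortest_path(G, start, goal)   # the original plan
G.remove_edge(route[1], route[2])          # close one street on the route

still_valid = all(G.has_edge(u, v) for u, v in zip(route, route[1:]))
print("old route still valid:", still_valid)                  # False
print("a detour still exists:", nx.has_path(G, start, goal))  # True
print("replanned route:", nx.shortest_path(G, start, goal))
```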

That’s my take on it:

The disconnect between clean models and the messy real world is not a new problem. In fact, it mirrors existing challenges in conventional statistics. In parametric statistics, we often make unrealistic assumptions about data structures, such as normality and independence: we expect clean data, rely on linear models despite most real-world relationships being non-linear, and treat experimental methods as the gold standard. Robustness to non-normality, heteroskedasticity, and other assumption violations is a highly sought-after feature, and similar principles may apply to LLMs.

While controlled environments provide clarity and reproducibility, they often fail to capture the richness and unpredictability of real-world scenarios. Similarly, training LLMs on strategically optimized data may cause them to overfit to specific patterns, limiting their generalizability. A promising approach to address this challenge could be to combine LLMs with other models, such as reinforcement learning agents trained in dynamic simulations, to enhance their understanding of complex and dynamic environments.

Link: https://arxiv.org/pdf/2406.03689

Posted on November 20, 2024

A recent article titled "AI-Assisted Genome Studies Are Riddled with Errors" by Dr. Sitaraman highlights the challenges and errors associated with using artificial intelligence (AI) in large genomics studies. Researchers have employed AI to fill in gaps in patient information and improve predictions in genome-wide association studies (GWAS). However, new research from the University of Wisconsin-Madison reveals that these AI-assisted approaches can lead to false positives and misleading correlations.

For 15 years, GWAS has been used to identify genetic variants associated with traits or diseases. Despite its success, GWAS has limitations, which scientists have attempted to overcome using AI. However, AI can introduce biases, especially when working with incomplete datasets. The research highlights that AI-assisted GWAS can create false associations between gene variants and diseases. For instance, AI models showed a high correlation between certain gene variants and type II diabetes that was not supported by conventional GWAS. Furthermore, the use of proxy data, such as family history, in GWAS-by-proxy (GWAX) can also lead to incorrect conclusions. For example, AI approaches showed a positive correlation between educational attainment and Alzheimer's risk, contrary to established GWAS findings. The research team suggests new statistical methods to correct these biases and emphasizes the need for transparency and rigor in reporting findings from AI-assisted studies.
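To see how an imputed phenotype can manufacture a false positive, here is a toy simulation (my illustration, not the Wisconsin team's actual analysis). The true phenotype has no SNP effect at all, but the imputation model has, hypothetically, learned a small spurious weight for the test SNP, so testing the imputed phenotype simply "rediscovers" the model's own weight as a hit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 20_000

snp = rng.binomial(2, 0.3, size=n).astype(float)  # genotype: 0/1/2 copies
y_true = rng.normal(size=n)                       # phenotype: NO SNP effect

# "AI-imputed" phenotype: mostly noise, plus a small spurious weight that
# the imputation model has (hypothetically) learned for this SNP.
y_imputed = 0.05 * snp + rng.normal(size=n)

for label, y in [("measured", y_true), ("imputed ", y_imputed)]:
    r, p = stats.pearsonr(snp, y)
    print(f"{label} phenotype vs. SNP: r={r:+.4f}, p={p:.3g}")
# The measured phenotype shows no association; the imputed one yields a
# "significant" hit that reflects the imputation model, not biology.
```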

That’s my take on it:

No doubt machine learning methods have overshadowed conventional statistics in big data analytics. However, no solution is 100% foolproof. Like conventional statistics, machine learning methods can be misguided and misused. We should avoid the "when all you have is a hammer, every problem looks like a nail" mentality: don't apply ML just because it is popular or powerful, and then mindlessly assume that the conclusion must be right. Rather, we must consider the nature of the data, the question being asked, and the desired outcome. When a method is experimental and the data pattern is strange, we must evaluate it with skeptical eyes. After all, skepticism is the guiding principle of Tukey's exploratory data analysis.

Link: https://www.the-scientist.com/ai-assisted-genome-studies-are-riddled-with-errors-72339

Posted on November 15, 2024

According to the South China Morning Post, Doubao, a conversational AI bot developed by ByteDance and launched in August, has quickly become China's most popular AI app, boasting 51 million monthly active users. This far exceeds the user bases of Baidu's Wenxiaoyan (formerly known as Ernie Bot), with 12.5 million users, and Moonshot AI's Kimi, backed by Alibaba Group, with 10 million users.

Doubao prioritizes personalization and a human-like interaction experience, aiming to make AI more accessible. Doubao's diverse features include writing assistance, summarization, image, audio, and video generation, data analysis, and AI-powered online search. Within three months, it introduced over 20 new skills, earning praise for its effective text editing, logical content organization, and user-friendly design.

That’s my take on it:

While Doubao has demonstrated remarkable growth and capabilities, it is difficult to compare it directly to global AI tools like ChatGPT, Claude, or Perplexity AI without standardized benchmarks. This highlights a growing divergence in the global AI landscape. Much like the broader internet in China, which has been heavily regulated under the Great Firewall since the late 1990s, the AI market is shaped by domestic policies and international competition. The Great Firewall restricts access to foreign websites, leading to the creation of Chinese alternatives to global platforms, such as Baidu instead of Google and WeChat instead of WhatsApp. These restrictions mean that Chinese internet users and users in other countries often have vastly different online experiences and knowledge bases.

This pattern extends to AI, where China's market is dominated by domestic products due to regulatory constraints that limit access to global AI tools like ChatGPT, Claude, Google Gemini, and Perplexity AI. These American AI companies choose not to operate in China due to difficulties in complying with local laws and regulations regarding AI and information control. As technology advances, it raises a critical question: does it bring people closer together, or does it reinforce divisions? The parallel growth of distinct digital ecosystems suggests that technology, while offering unprecedented possibilities, also has the potential to deepen divides.

Link: https://www.scmp.com/tech/tech-trends/article/3286276/chinas-hottest-ai-bot-bytedances-doubao-tops-charts-51-million-active-users?module=top_story&pgtype=subsection

Posted on November 13, 2024

Today, Rguroo announced the release of Rguroo Version 2.0, titled “Journey to Data Science.” Among its major updates is an enhanced logistic regression feature. The new diagnostic tools include interactive logistic curve plotting, prediction assessments, external data prediction, model validation, and k-fold cross-validation. Rguroo is a web-based statistical platform with a graphical user interface (GUI) that provides access to R’s capabilities without requiring users to know R programming.
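For readers who prefer code to a point-and-click interface, the pair of diagnostics highlighted in this release, fitting a logistic regression and validating it with k-fold cross-validation, takes only a few lines in Python with scikit-learn. This is a generic sketch on a bundled dataset, not a recreation of Rguroo's output.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Fit a logistic regression and assess it with 5-fold cross-validation,
# mirroring the k-fold validation feature described above.
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores.round(3))
print("mean accuracy:  ", scores.mean().round(3))
```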

That’s my take on it:

While Rguroo benefits analysts and students by making R’s statistical and graphical tools more accessible, I am uncertain whether this release truly represents a “journey to data science.” After exploring the software, I noticed it lacks core data science methods, such as decision trees, random forests, XGBoost, gradient boosting, and neural networks. The focus on logistic regression—while a valuable classical statistical method—reflects a model-driven, inference-centered approach, rather than the data exploration and pattern recognition that define data science and machine learning. In fact, this highlights a broader issue: many so-called “data science” tools and programs don’t fully reflect the paradigm shift toward data-centric methodologies.

Link: https://rguroo.com/

Posted on November 8, 2024

According to Forbes Advisor, the top data visualization tools for business in 2024 are as follows:

Microsoft Power BI: Leader in business intelligence (BI) with robust integration capabilities

Tableau: Known for sophisticated interactive visualizations

Qlik Sense: Stands out for AI integration and machine learning features

Klipfolio: Excels in custom dashboard creation

Looker: Provides comprehensive visualization options and data modeling

Zoho Analytics: Seamlessly integrates with other Zoho products

Domo: Distinguished by its custom app development capabilities

The evaluation criteria included user-friendliness, cost-effectiveness, support quality, and key features such as real-time analytics, customization options, and collaborative data sharing.

That’s my take on it:

Data visualization tools are essential for both business and academic purposes, offering powerful ways to analyze and present complex data. While the tools mentioned by Forbes are indeed popular for business intelligence, there are several excellent options for academics and other specialized purposes. For example,

·      SAS Visual Analytics on SAS Viya: General purposes

·      JMP Pro: General purposes

·      IBM Watson Studio: General purposes

·      MATLAB: Popular in engineering and scientific computing, providing robust visualization tools alongside computational capabilities.

·      Wolfram Mathematica: A powerful and comprehensive computational software system that offers extensive capabilities for data visualization, scientific computing, and statistical analysis.

·      Origin: Specifically designed for scientific graphing and data analysis, popular in physical sciences and engineering.

·      Gephi: An open-source tool particularly useful for network analysis and visualization, popular in social sciences and complex systems research.

·      Python with libraries like Matplotlib, Seaborn, and Plotly: Widely used in data science and research for its flexibility and powerful visualization options.
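As a minimal illustration of the Python route in the last bullet, the sketch below uses seaborn (which builds on Matplotlib) to produce a quick statistical graphic from one of seaborn's bundled example datasets:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Quick statistical graphic with seaborn; "tips" is an example dataset
# that ships with seaborn (fetched on first use).
tips = sns.load_dataset("tips")

sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.title("Tip vs. total bill")
plt.tight_layout()
plt.show()
```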

Factors for selecting visualization tools:

·      Data complexity and size: Tools like SAS Viya and IBM Watson Studio are better suited for very large datasets.

·      Statistical analysis needs: JMP and Python offer more advanced statistical capabilities.

·      Collaboration requirements: Cloud-based solutions like IBM Watson Studio may offer better collaboration features.

·      Domain-specific needs: Some fields may have preferred tools (e.g., Gephi for network analysis; MATLAB and Mathematica for mathematics and engineering).

Link: https://www.forbes.com/advisor/business/software/best-data-visualization-tools/

Posted on November 1, 2024

A recent study conducted by University of Toronto researchers found that, in the long run, the use of large language models (LLMs) may reduce human creativity in terms of divergent and convergent thinking. The study involved two large experiments with 1,100 participants to assess how different forms of LLM assistance affect independent creative performance. The researchers found that LLM assistance can enhance creativity during assisted tasks but may hinder independent creative performance in subsequent unassisted tasks. Participants who had no prior exposure to LLMs generally performed better in the test phase, suggesting that reliance on LLMs could impair inherent creative abilities.

The effects of LLMs varied significantly between divergent and convergent thinking tasks. In divergent thinking, where participants needed to propose alternatives, they showed skepticism towards LLM assistance. Conversely, in convergent tasks, where participants were asked to narrow down diverse ideas to the final solution, they tended to accept LLM assistance. The study found that LLM-generated strategies could lead to a homogenization of ideas, where participants produced more similar outcomes even after ceasing LLM use. This effect was particularly pronounced in the divergent thinking tasks, raising concerns about the long-term impact on creative diversity.

That’s my take on it:

The findings from the University of Toronto study underscore a need to balance AI assistance with practices that actively cultivate our own creativity and critical thinking. To encourage creative independence, people should use AI as a tool to generate initial ideas or inspiration, but refine, expand, and adapt these ideas independently.

This ensures that AI serves as a starting point rather than the end goal, promoting one's own creative engagement. As a professor, I will never accept any assignment directly output from AI. For divergent tasks, such as brainstorming, we should deliberately avoid using AI to prevent “homogenized” ideas. We should turn to a variety of resources and experiences for creative inspiration. Books, in-person conversations, physical exploration, and hands-on activities can all spark unique perspectives and insights that AI-generated suggestions may not provide.

Link to the research article: https://arxiv.org/abs/2410.03703

Link to video: https://drive.google.com/file/d/1z-zJXNYVzNo6_ZUe-T_DXGmN6yPG57GA/view?usp=sharing

Posted on October 25, 2024

Recently, the mother of a 14-year-old boy who died by suicide after becoming deeply engaged with AI chatbots filed a lawsuit against Character.AI, claiming the company’s technology manipulated her son, Sewell Setzer III. Megan Garcia, his mother, alleges that the AI chatbot app, marketed to children, exposed Sewell to "hypersexualized" and lifelike interactions that contributed to his mental distress. The lawsuit states that Sewell, who began using Character.AI's bots in April 2023, grew obsessed with personas based on characters from Game of Thrones, especially the Daenerys chatbot. This chatbot reportedly engaged in intimate, emotionally charged conversations with Sewell, including discussions of suicide. After expressing suicidal thoughts, Sewell allegedly received responses that reinforced these thoughts, leading up to his tragic death in February 2024.

Character.AI expressed condolences and emphasized recent updates, including safety features for users under 18 to reduce exposure to sensitive content and discourage prolonged usage. Garcia’s legal team claims that Sewell lacked the maturity to recognize the AI’s fictional nature and alleges that Google, due to its close ties with Character.AI, should also be held accountable. However, Google denies involvement in the development of Character.AI’s products.

That’s my take on it:

Currently, the field of AI remains largely unregulated, and this isn’t the first time Character.AI has faced allegations of unethical practices. Previously, it was discovered that Character.AI used the face of a deceased woman as a chatbot without her family’s consent, raising further ethical concerns. 

Regarding the current case, Character.AI has a duty to protect minors, especially from potentially manipulative or harmful interactions. Given Sewell’s young age and apparent emotional vulnerability, the chatbot's responses—particularly on topics like suicide—raise significant ethical concerns. AI systems marketed to the public should include stringent protections to prevent unintended harm, especially among younger or emotionally vulnerable users. Ethical AI involves ensuring users understand that they are interacting with a program, not a real person. Despite Character.AI’s disclaimer efforts, many users, especially younger ones, might still struggle to fully separate the AI from a genuine human connection. For minors, such “relationships” with virtual characters could create emotional dependency, as seen with Sewell and the chatbot he interacted with.

Links: https://futurism.com/character-ai-murdered-woman-crecente

https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791

Posted on October 20, 2024

Perplexity AI has recently introduced a new feature called Spaces, which allows users to upload their own files into a private space. The chatbot can then answer questions based on this locally stored content. Additionally, the AI system can generate FAQs, briefing documents, and more from the uploaded information. For Perplexity Pro (paid) users, the system will search across both the web and the internal files within their space, providing more customized and relevant responses. Spaces also offer a collaborative virtual environment, enabling users to invite others to join, share files, and tailor the AI assistant with specific instructions for various projects.

That’s my take on it:

This feature has numerous business applications. For instance, a company could upload all its tech support logs to improve its support chatbot. In higher education, professors can upload lecture slides and papers to create a personalized knowledge base for students, while course designers can use localized content to build unique courses. More importantly, the user can compare internal, national, and international data by asking questions such as, “How do our retention rates compare to the national rates over the past five years? Are our retention rates substantively better than the national average?” I encourage you to explore various possible applications of blending both global and local contents. The sky is the limit!

https://www.perplexity.ai/hub/faq/what-are-spaces

https://www.perplexity.ai/spaces

Posted on October 10, 2024

The 2024 Nobel Prize in Chemistry was awarded to three researchers for their groundbreaking work in protein structure prediction and design using artificial intelligence (AI):

·      Demis Hassabis (British): Co-founder and CEO of Google DeepMind

·      John M. Jumper (American): Director at Google DeepMind

·      David Baker (American): Professor of Biochemistry at the University of Washington

Hassabis and Jumper, working together at Google DeepMind, developed AlphaFold, an AI tool that revolutionized protein structure prediction. In 2020, AlphaFold solved a decades-old problem: predicting the three-dimensional structure of proteins from their amino acid sequences. Since then, AlphaFold has been used to predict the shapes of all known proteins, and their latest version, AlphaFold 3, extends these predictions to DNA, RNA, and small molecules like ligands, which are vital for drug discovery.

David Baker, meanwhile, pioneered AI tools for protein design and structure prediction, including the widely-used Rosetta family of tools. He also created ProteinMPNN, an open-source AI tool that aids researchers in discovering unknown proteins and designing new ones. Recently, Baker’s lab developed custom molecules capable of precisely targeting and eliminating disease-associated proteins in living cells.

That’s my take on it:

This award underscores the growing role of AI in scientific research, marking the second Nobel Prize awarded for AI-related work in 2024, following the Nobel Prize in Physics for neural network research. The recognition of AI in both chemistry and physics has ignited discussions about how computational methods are reshaping traditional scientific disciplines.

In response to the Nobel Prize in Physics, psychologist Gary Marcus raised concerns, noting that while Geoffrey Hinton has significantly influenced machine learning, it remains unclear how his work advanced physics specifically. He referenced Steve Hanson, arguing that the Nobel committee might not fully grasp the history of neural networks. However, Marcus sees the Chemistry Prize for Hassabis and Jumper as a clear and well-deserved win. This complicated issue, which involves the tension between the connectionist and the symbolic approaches to AI, necessitates deeper discussions.

Links:

https://garymarcus.substack.com/p/two-nobel-prizes-for-ai-and-two-paths

https://www.theguardian.com/science/2024/oct/09/google-deepmind-scientists-win-nobel-chemistry-prize

Posted on October 8, 2024

The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to John J. Hopfield and Geoffrey E. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Often referred to as the "godfather of AI," Hinton’s contributions laid the groundwork for the machine learning systems that are transforming fields such as medical diagnostics by enabling faster and more accurate decision-making. His pioneering work has driven the rapid development of modern machine learning. The prize, worth 11 million Swedish kronor, will be shared equally between Hopfield and Hinton.

That’s my take on it:

Interestingly, Hinton is not a physicist but a cognitive psychologist and computer scientist. His contributions to AI date back to the 1970s. At that time, UCSD researcher David Rumelhart, who was trained in both psychology and mathematics, sought to improve the perceptron system by introducing the backpropagation algorithm into multilayer networks. This algorithm, also known as backprop, adjusts the weights of connections between neurons to minimize the difference between predicted and actual outputs. Initially, the algorithm failed when weights were set to zero, but Geoffrey Hinton’s insight saved the day. He suggested using random weights instead of zero, and once this change was implemented by the PDP research group, the algorithm began to function properly. The rest is history—and I’m proud of psychology’s role in it!
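The symmetry problem described above is easy to reproduce. Below is a minimal numpy sketch (my illustration of the general idea, not the PDP group's code): a tiny two-layer network trained with backprop on XOR. With all-zero initial weights, every hidden unit receives identical gradients and the units never differentiate, so training stalls; small random weights break the symmetry and the network learns.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W1, b1, W2, b2, lr=0.5, epochs=8000):
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)               # forward pass: output
        d_out = (out - y) * out * (1 - out)      # backprop: output error
        d_h = (d_out @ W2.T) * h * (1 - h)       # backprop: hidden error
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return out

H = 4  # number of hidden units
# Zero initialization: the hidden units stay identical, so XOR is unlearnable.
zero = train(np.zeros((2, H)), np.zeros(H), np.zeros((H, 1)), np.zeros(1))
# Small random initialization: symmetry is broken; for most seeds the
# predictions converge to roughly 0, 1, 1, 0.
rand = train(rng.normal(0, 0.5, (2, H)), np.zeros(H),
             rng.normal(0, 0.5, (H, 1)), np.zeros(1))

print("zero-init predictions:  ", zero.round(2).ravel())  # stuck near 0.5
print("random-init predictions:", rand.round(2).ravel())
```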

Link: https://www.nobelprize.org/prizes/physics/2024/press-release/

Posted on October 5, 2024

Meta has recently introduced Movie Gen, a cutting-edge AI-powered video generation tool that marks a major advancement in AI-generated media. Users can upload a photo of themselves and pair it with a text prompt to create a personalized video that captures human identity and movement. Movie Gen also allows users to edit existing videos or images through text commands, enabling precise modifications such as changing styles, transitions, or adding new elements. For instance, with the command “Transform the lantern into a bubble that soars into the air,” the lantern in the video seamlessly transforms into a bubble, with a strikingly realistic effect.

That’s my take on it:

As an experimental product, Movie Gen currently has limited functionality. It can produce high-definition videos up to 16 seconds long at 16 frames per second based on text prompts. For videos at the more standard 24 frames per second, the maximum duration is 10 seconds. While this falls short of the capabilities of OpenAI’s Sora in terms of video length, the trend is clear: conventional video production and content creation roles are likely to be disrupted by such advances. We cannot ignore this trend and do business as usual!

Link: https://ai.meta.com/research/movie-gen/

Posted on October 4, 2024

Last week, Google introduced its AlphaChip AI, a reinforcement learning-based method for designing chip layouts. AlphaChip aims to significantly reduce the time needed to create chip floorplans and enhance their performance, power efficiency, and area optimization. Typically, designing a floorplan for a complex chip, like a GPU, takes about 24 months if done manually by humans. Even for less complex chips, the process can still take several months, costing millions due to the large design teams involved. Google claims that AlphaChip can speed up this process, producing a chip layout in just a few hours.

However, independent researchers have recently raised concerns about whether Google has demonstrated that its AI can truly outperform expert human designers or existing commercial software. In 2023, Andrew Kahng, a professor at the University of California, San Diego, retracted his Nature commentary that initially praised Google’s work, noting that replication was necessary. After running a public benchmarking effort to replicate Google’s method, Kahng found that the AI did not consistently surpass human experts or conventional algorithms.

That’s my take on it:

Agreed! They should bring in an independent company to conduct a randomized experiment to validate their claims. Google is not alone. Cadence has developed AI-based tools like Cadence.AI, which helps automate PCB design and chip design flows. Similarly, NVIDIA has conducted research on using reinforcement learning for chip floor planning. While these AI-driven approaches show promise, their effectiveness compared to traditional methods is still being evaluated by the industry.

Link: https://www.newscientist.com/article/2450402-google-says-its-ai-designs-chips-better-than-humans-experts-disagree/

Posted on September 20, 2024

According to an article published in Nature on September 18, 2024, the rapid advancement of AI has shifted much of the cutting-edge research from academia to private industry, largely due to significant funding advantages. Specifically, a growing portion of AI research is no longer published in leading peer-reviewed scientific journals. The percentage of research articles with at least one industry co-author rose from 22% at top AI conferences in 2000 to 38% in 2020. Industry’s share of the largest and most capable AI models jumped from 11% in 2010 to 96% in 2021.

However, in China, academic institutions continue to play a crucial role in AI research and development. Between 2019 and 2021, China’s share of AI research more than doubled. The gap between the United States and China is narrowing rapidly, with six of the top ten AI research institutions now located in China (see the list below).

1 Chinese Academy of Sciences (China)

2 Harvard University (United States)

3 Peking University (China)

4 Tsinghua University (China)

5 Zhejiang University (China)

6 Massachusetts Institute of Technology (United States)

7 Max Planck Society (Germany)

8 Helmholtz Association of German Research Centers (Germany)

9 University of Science and Technology of China (China)

10 Shanghai Jiao Tong University (China)

That’s my take on it:

Uncle Sam must allocate more AI research funding to universities. I need course releases and funding to hire research assistants!

Link: https://www.nature.com/articles/d41586-024-02985-3

Posted on September 20, 2024

Two days ago, China’s tech giant Alibaba Cloud announced its flagship AI product, Qwen 2.5, with models that appear to be highly competitive in the current landscape of large language models (LLMs). The Qwen 2.5 series spans a range of model sizes, from 0.5 billion to 72 billion parameters, and was trained on a dataset of up to 18 trillion tokens. Alibaba asserts that its largest model, Qwen2.5-72B, outperforms competitors like Meta AI’s Llama-3.1-70B and Mistral-Large-V2 on benchmarks such as MMLU. Even smaller versions like Qwen2.5-14B and Qwen2.5-32B reportedly match the performance of larger models from other companies.

Since its introduction in April 2023, the Qwen model series has gained significant traction, with over 40 million downloads across platforms like Hugging Face and ModelScope, and it has inspired the creation of over 50,000 derivative models on Hugging Face.
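Because the Qwen weights are openly distributed, trying one of the smaller checkpoints takes only a few lines with the Hugging Face transformers library. A minimal sketch follows; the repo id is an assumption on my part, so check the Qwen organization page on Hugging Face for the exact name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed repo id; verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give a one-sentence definition of machine learning."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```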

That’s my take on it:

While I admire Alibaba Cloud’s achievements, I am uncertain whether this leading position in AI is sustainable. First, many of China’s AI models rely on open-source frameworks developed by their U.S. counterparts. Should these U.S. companies choose to make their models proprietary, as OpenAI has done, it could pose challenges for China's AI progress. Second, regardless of how advanced the software becomes, AI systems are still heavily dependent on high-performance hardware, particularly GPUs, and the current U.S. restrictions on the sale of advanced GPUs to China significantly hinder its AI development. Furthermore, as with any new AI model, independent verification by researchers and organizations is necessary to validate performance claims. While benchmarks can provide insights, they don’t always reflect real-world performance across various applications.

Link: https://the-decoder.com/qwen-2-5-alibabas-new-ai-models-challenge-the-competition/

Posted on September 13, 2024

Yesterday (September 12, 2024) OpenAI released the o1 series of models for ChatGPT, introducing two versions: o1-preview and o1-mini. This launch marks a significant step in AI development, particularly for complex reasoning tasks.

Overview of o1 Models

The o1 models are designed to enhance reasoning capabilities, allowing the AI to spend more time "thinking" before responding. This approach aims to improve performance on intricate problems, especially in fields like science, coding, and mathematics. For instance, in evaluations, the o1 model scored 83% on an International Mathematics Olympiad qualifying exam, compared to just 13% for its predecessor, GPT-4o. In addition, the o1 model reached the 89th percentile in Codeforces competitions, indicating its strong capabilities in programming. Finally, the o1 models utilize a multi-step reasoning process, similar to human problem-solving, which helps in breaking down complex tasks.

That’s my take on it:

Several YouTubers tested the o1 model with trick questions like “after three hours, how many pieces of ice will remain in my hand?” and “how many letters ‘r’ are in the word ‘strawberry’?” Unlike its predecessor, the new model successfully provided the correct answers. Moreover, the o1 model has significantly improved in scientific reasoning and computation. As noted earlier, the o1-preview model achieved 83% accuracy on the International Mathematics Olympiad (IMO) qualifying exam and ranked in the 89th percentile in Codeforces competitions. This could help students enhance their analytical and programming skills, potentially democratizing access to advanced STEM education. We cannot ignore this trend. Educators have to be proactive!
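For the curious, the snippet below shows both the ground truth behind the "strawberry" trick question and a sketch of posing it to the o1 model through OpenAI's Python SDK. It assumes the openai package is installed, an OPENAI_API_KEY is set in the environment, and that the model id "o1-preview" is still current; adjust as needed.

```python
from openai import OpenAI

# Ground truth in one line: there are three r's in "strawberry".
print("ground truth:", "strawberry".count("r"))

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="o1-preview",  # model id at the time of writing; may change
    messages=[{"role": "user",
               "content": "How many times does the letter r appear in 'strawberry'?"}],
)
print(resp.choices[0].message.content)
```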

Link: https://openai.com/o1/

Posted on September 10, 2024

On September 5, 2024, the United States, United Kingdom, European Union, and several other countries signed the world's first legally binding international treaty on artificial intelligence, called "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law." The treaty, developed by the Council of Europe, aims to regulate the use of AI while promoting responsible innovation, and it establishes a "Conference of Parties" to promote cooperation and information exchange among signatories. The treaty applies to all AI systems except those used in national security or defense, though it still requires these activities to respect international laws and democratic principles. It requires risk monitoring, documentation, and testing of AI systems.

That’s my take on it:

As the name implies, it sets forth seven key AI principles for implementation by signatories, focusing on protecting human rights, democracy, and the rule of law. The treaty emphasizes that AI systems must align with human rights principles and uphold democratic values. Based on the current geopolitical landscape and the principles emphasized in this treaty, it is unlikely that semi-democratic and non-democratic nations like Russia and North Korea would join this particular AI agreement. In the future, there might be a two-tiered system of AI in the international arena: (1) a democratic bloc, led by democracies, emphasizing human rights and democratic values in AI development and deployment, and (2) an authoritarian bloc, focusing more on state control and national security concerns in AI governance. We've already seen similar divides in other technology governance areas, such as internet regulation and data privacy laws. The AI treaty situation may follow a similar pattern of fragmentation.

Link: https://www.reuters.com/technology/artificial-intelligence/us-britain-eu-sign-agreement-ai-standards-ft-reports-2024-09-05/

Posted on September 5, 2024

Elon Musk recently announced a major milestone in a Monday post on X, revealing that xAI's system, known as Colossus, has come online. xAI, which Musk launched last year to compete with OpenAI, develops large language models called Grok. In May, the company raised $6 billion at a $24 billion valuation to fuel its AI development. Currently Colossus incorporates 100,000 Nvidia H100 GPUs, and is planned to expand to 200,000 GPUs, including 50,000 of the newer H200 models, in the coming months.

In his post, Musk described Colossus as the “most powerful AI training system in the world,” implying that it surpasses the U.S. Energy Department’s Aurora system. According to SiliconANGLE, Aurora is currently ranked as the world’s fastest AI supercomputer.

That’s my take on it:

Based on the most recent information from the Top500 list, the U.S. Department of Energy's Aurora system is not currently the world's fastest supercomputer. Rather, it is the second fastest one. Frontier, located at Oak Ridge National Laboratory, remains the world's fastest supercomputer with a performance of 1.206 exaflops.

While Colossus represents a significant advancement in AI computing, at most it outperforms the world’s second fastest supercomputer. In addition, the absence of standardized performance metrics or direct comparisons makes Musk's claim difficult to verify. Furthermore, the fast-paced nature of AI development means such statements can quickly become outdated as technology progresses.

Links: https://top500.org/lists/top500/2024/06/

https://siliconangle.com/2024/09/03/elon-musks-xai-launches-colossus-ai-training-system-100000-nvidia-chips/

Posted on September 4, 2024

According to the August 2024 edition of the TIOBE index, Python remains the most popular programming language, achieving a ranking of over 18% for the first time in its history. The last language to surpass 18% was Java in November 2016. Notably, SQL ranks 7th, R is in 19th place, and SAS occupies the 27th spot.

That’s my take on it:

Python's broad utility as a general-purpose language, suitable for both data analytics and web development, contributes to its well-deserved popularity. In contrast, SQL, R, and SAS are more specialized, focusing primarily on data extraction and analytics. SQL in particular is a standardized language implemented across many systems, such as MySQL, Microsoft SQL Server, Oracle, and SAS’s PROC SQL, which helps maintain its higher ranking. SAS, being proprietary software, still outperforms other proprietary tools like SPSS, which did not make the Top 50.
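SQL's portability is easy to demonstrate from Python itself: the standard library ships with the SQLite engine, and the query below would look essentially the same in MySQL, SQL Server, Oracle, or SAS's PROC SQL. (The rank numbers are the ones cited above.)

```python
import sqlite3

# Standard SQL running on the SQLite engine bundled with Python.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE langs (name TEXT, tiobe_rank INTEGER)")
con.executemany("INSERT INTO langs VALUES (?, ?)",
                [("Python", 1), ("SQL", 7), ("R", 19), ("SAS", 27)])

for name, rank in con.execute(
        "SELECT name, tiobe_rank FROM langs ORDER BY tiobe_rank"):
    print(f"{name}: #{rank}")
```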

Link: https://www.tiobe.com/tiobe-index/ 

Posted on August 30, 2024

For several years, Apple has been using Applebot to improve Siri and provide Spotlight suggestions. More recently, it has expanded its use to train Apple Intelligence. In the summer of 2024, Apple introduced Applebot-Extended, giving websites the option to opt out of having their data used for AI training. According to a recent Wired magazine report, several major news and social media platforms, including The New York Times, Facebook, Instagram, Craigslist, Tumblr, the Financial Times, The Atlantic, USA Today, and Conde Nast, have chosen to opt out of Apple’s training program. Currently, around 6% to 7% of high-traffic websites are blocking Applebot. Additionally, data journalist Ben Welsh’s recent analysis found that over 25% of the websites he surveyed (294 of 1,167, primarily English-language, U.S.-based news sites) are blocking Applebot-Extended.
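The opt-out works through the familiar robots.txt mechanism: Apple documents "Applebot-Extended" as the user-agent token that controls AI-training use. As a minimal sketch, Python's standard library can check whether a given site blocks it (the results depend on each site's live robots.txt, so treat the output as a point-in-time snapshot):

```python
from urllib.robotparser import RobotFileParser

def blocks_applebot_extended(domain: str) -> bool:
    """Fetch the site's robots.txt and test Apple's AI-training token."""
    rp = RobotFileParser(f"https://{domain}/robots.txt")
    rp.read()  # downloads and parses the live robots.txt
    return not rp.can_fetch("Applebot-Extended", f"https://{domain}/")

for site in ["www.nytimes.com", "www.theatlantic.com"]:
    print(site, "blocks Applebot-Extended:", blocks_applebot_extended(site))
```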

 

That’s my take on it:

Currently there are two opposing trends at play. While some major publishers have struck deals with OpenAI to allow their content to be included in large language models, others are resisting by opting out and even taking legal action against AI companies to protect their intellectual property. It’s ironic that many people criticize AI for bias, but this issue is intrinsic and unlikely to be resolved soon, especially when some information is excluded from AI training. Just as bias in analysis can arise from self-selected samples, self-selection of AI training data will also result in biased outcomes.

Posted on August 30, 2024

A recent report from Maximize Market Research predicts that demand for data visualization tools will grow at a compound annual rate of 11.6 percent through 2030. Specifically, the global data visualization software market, valued at USD 9.55 billion in 2023, is expected to reach USD 21.70 billion by 2030. This demand is rising sharply in key sectors such as finance, healthcare, and retail, with significant interest from global financial hubs like New York, London, and Singapore. Leading companies in the field of data visualization include Salesforce (Tableau), Microsoft (Power BI), and IBM.

That’s my take on it:

When I was a student, data visualization was “marginalized.” Most publishers restricted the number of statistical graphs due to the high cost of printing, and statisticians preferred confirmatory data analysis to exploratory data analysis by data visualization. Today, the situation has completely changed. Data visualization is now an integral component of data analytics. With the availability of big data, analysts can build predictive models based on the visual trends and patterns of the data at hand, rather than relying on inferring from a small sample to a larger population. Seeing is believing!

Link: https://www.prnewswire.co.uk/news-releases/data-visualization-tools-market-expected-to-grow-at-a-cagr-of-11-6-percent-reaching-usd-15-80-billion-by-2030--302233051.html

Posted on August 23, 2024

OpenAI and Condé Nast recently announced a partnership that will allow ChatGPT and its search engine, SearchGPT, to feature content from popular publications like Vogue, The New Yorker, and GQ. Other major media outlets, including Time Magazine, the Financial Times, and the Associated Press, have also partnered with the AI firm. However, some media companies, such as The New York Times and the Chicago Tribune, have resisted this move and have taken legal action to protect their content, arguing that AI models trained on their material without permission may infringe on their intellectual property rights.

That’s my take on it:

This partnership between OpenAI and Condé Nast marks a significant shift in how publishers are engaging with AI and large language models (LLMs). The trend of collaboration is likely to grow, as more publishers recognize the potential benefits of having their content included in LLMs. When users search for information through platforms like Bing, SearchGPT, and Perplexity, they are likely to follow links to the original sources, providing publishers with valuable exposure. By allowing their content to be used in AI chatbots and search tools, publishers can reach a broader audience, potentially increasing traffic to their websites. As AI-driven information retrieval becomes more widespread, publishers who choose not to participate may risk losing visibility and relevance in a landscape where audiences increasingly rely on these tools for information.

Link: https://www.bbc.com/news/articles/cpqjvl9z9w1o

Posted on August 16, 2024

On August 13, xAI, Elon Musk's AI company, launched beta versions of two new language models: Grok-2 and Grok-2 mini. These models are available to subscribers of X (formerly Twitter), Musk's social media platform. The release also includes integration with Flux, a recently unveiled image synthesis model that allows X users to generate largely uncensored photorealistic images for sharing on the platform.

The Grok models can reportedly produce a wide range of controversial images, including:

·      Celebrities in provocative situations (e.g., Taylor Swift in lingerie)

·      Political figures in violent or compromising scenarios (e.g., Obama threatening Biden with a knife)

·      Beloved characters engaged in graphic acts (e.g., Mickey Mouse killing people with a machine gun)

·      Public figures in intimate or illicit contexts (e.g., Trump and Kamala Harris kissing, Bill Gates using illegal drugs)

The above examples are what I saw on the Internet. Please do not Google for them if you think you might be offended by these images.

That’s my take on it:

Grok’s approach contrasts sharply with many existing generative AI art tools like Midjourney, Adobe Firefly, and DALL-E. These platforms typically employ preemptive filtering, which is often more restrictive than AI moderation on social media. While social media platforms generally react to content after it is shared and a violation is detected, these AI art tools aim to prevent potentially problematic content from being created at all. In response, artists have complained that such filtering may limit artistic expression and creativity.

Grok's uncensored approach represents the opposite extreme, raising concerns about potential misuse by malicious actors. Finding a balance between creative freedom and responsible content moderation in AI-generated media has been an ongoing debate. If you are interested in exploring this topic, please take my class DSCI 6400 AI and data ethics.

Link: https://arstechnica.com/information-technology/2024/08/musks-new-grok-upgrade-allows-x-users-to-create-largely-uncensored-ai-images/

Posted on August 16, 2024

As you may already know, SAS Institute has been offering free access to SAS OnDemand (SAS Studio) for nearly a decade. Recently, they have also made JMP Pro—an advanced, intuitive tool for data mining and visualization—available to faculty and students at no cost. You can download the installer using the following link:

https://www.jmp.com/en_us/academic/jmp-student-edition.html

On the webpage you will notice it is referred to as “JMP Student” instead of “JMP Pro.” Nevertheless, it offers the same platforms and features as JMP Pro, exclusively for faculty and students. Please be aware that you need to register with your university email (.edu) in order to be eligible.

That’s my take on it:

Classical statistics, also known as confirmatory data analysis, involves testing a predetermined hypothesis to reach a binary decision. In the 1960s and 1970s, John Tukey pioneered exploratory data analysis (EDA), which is often considered a forerunner to data science, as both focus on uncovering patterns and insights. The 1980s saw the introduction of tools like DataDesk and JMP, designed to support data exploration and pattern recognition (DataDesk ceased operations in June 2024). However, many data analysis software packages today remain non-interactive, providing only static output; JASP, a GUI front end built on R, is at best semi-dynamic. Currently, several fully interactive systems are available, including SAS Viya, JMP Pro, and Tableau. I highly recommend JMP for its contextual menu system, extensive features, and user-friendliness. Plus, it’s free!

Posted on Aug. 9, 2024

Recently, Anaconda, a platform that supports open-source computing for languages like Python and R, announced the enforcement of its revised terms of service, now requiring research and academic organizations to pay for software that was previously free. The updated license agreement, effective March 2024, mandates that organizations with more than 200 employees must purchase a Business or Enterprise license. Anaconda made it clear: they have employees to support.

Last week, Mass General Brigham (MGB), a non-profit hospital and research organization in Boston, Massachusetts, issued a critical notice to potential Anaconda users, advising them that they have until the end of August to secure a license. MGB stated that Anaconda will no longer be available in the general software repositories of its HPC cluster, and researchers needing the software must purchase a license.

That’s my take on it:

I saw this coming. While the concept of open source, akin to socialism, is noble, it is unrealistic to expect software developers to work for free indefinitely. As Anaconda emphasized, they have employees who need to be paid to sustain their livelihood. In addition, while open-source software may have lower initial costs, the total cost of ownership includes maintenance, support, and potential customization. We must weigh these ongoing costs against the often higher upfront costs of proprietary software. More importantly, most open-source packages do not offer a graphical user interface; for me, typing commands rather than using drag-and-drop and point-and-click is going backwards.

Link: https://www.theregister.com/2024/08/08/anaconda_puts_the_squeeze_on/

Posted on Aug. 6, 2024

Recently Google's Olympics-themed advertisement for its Gemini AI product ignited controversy. The ad portrayed a young female runner whose father uses Gemini AI to help her write a letter to her idol, American track star Sydney McLaughlin-Levrone. The father's prompt to the AI was:

"Help my daughter write a letter telling Sydney McLaughlin-Levrone how inspiring she is and be sure to mention that my daughter plans on breaking her world record ... one day (She says sorry, not sorry"

This commercial faced significant backlash, with critics arguing that excessive use of generative AI tools could deprive children of valuable learning experiences. Many questioned why the child didn't write the letter herself or with her father's guidance. In the end, Google decided to pull the ad.

That’s my take on it:

“Once the genie is out of the bottle, you can't put it back in.” AI is here to stay and has become an integral part of our daily lives. This scenario raises concerns about the slippery slope of AI usage in education. There's a risk that students might progress from seeking AI assistance to relying on AI to generate entire papers and complete assignments. Furthermore, this behavior could carry over into their professional lives: some fear that future academicians might use AI to produce fraudulent research papers, and future artists and photographers might rely on AI so heavily that they lose mastery of basic skills. So, are we opening a Pandora's box? We need research on this!

Links: https://www.prdaily.com/google-pulls-ai-ad-after-backlash-from-olympics-audiences/

https://www.youtube.com/watch?v=NgtHJKn0Mck&t=45s

Posted on Aug. 2, 2024

“Don’t cry for me, Argentina!”

Recently, Argentina announced plans to use AI to predict and prevent future crimes. A new unit within the Ministry of Security will be responsible for conducting drone surveillance, monitoring social media, and employing facial recognition to enhance security measures. This unit will also utilize machine learning algorithms to analyze historical crime data to forecast and thwart potential criminal activities. The ministry cited the United States, China, Israel, and other countries as pioneers in integrating AI into security operations.

However, human rights groups have raised concerns that these measures could infringe on freedom of expression, as individuals may self-censor out of fear that their social media posts will be monitored by the government. The Argentine Center for Studies on Freedom of Expression and Access to Information pointed out that such technologies have historically been used to profile academics, journalists, politicians, and activists. They have called for transparency regarding the origin and specific use of these technologies, emphasizing that a lack of accountability is alarming.

That’s my take on it:

The tension between privacy and security is ever-present. On one hand, we strive to protect our privacy and resist the collection of our personal data by governments and corporations. On the other hand, we desire a safe society where crime can be prevented and perpetrators apprehended. Achieving this balance requires an effective monitoring and tracking system along with a comprehensive database. Interestingly, in 2019, San Francisco banned city agencies, including the police department, from using facial recognition technology. Yet, five years later, San Francisco residents voted to allow the police to use more surveillance technology to combat crime. Transparency and accountability serve as the mediators between these two extremes. These topics will be central to my upcoming data ethics class (DSCI 6400).

Link: https://www.cbsnews.com/news/argentina-plans-to-use-ai-to-predict-future-crimes-and-help-prevent-them/#:~:text=Argentina%20announced%20plans%20last%20week,Intelligence%20Unit%20Applied%20to%20Security.

Posted on July 31, 2024

Meta, Facebook’s parent company, will pay Texas $1.4 billion to settle a lawsuit alleging unauthorized use of personal biometric data. The 2022 lawsuit, filed by Texas Attorney General Ken Paxton, claimed Meta had used facial recognition software on photos uploaded to Facebook without Texans' consent. This is the largest legal settlement ever obtained by a single state and the largest privacy-related settlement secured by a state attorney general. The settlement will be paid over five years.

In 2011, Meta introduced a feature called Tag Suggestions to help users tag people in their photos more easily. According to Paxton’s office, this feature was enabled by default and used facial recognition on users' photos, capturing data protected by the 2009 law. Meta discontinued this system in 2021, deleting over 1 billion individuals' facial recognition data. Paxton is pursuing a similar case against Google owner Alphabet regarding biometric data.

That’s my take on it:

I am not defending Meta or Alphabet. However, it is important to note that the US currently lacks a comprehensive federal law regulating the use of biometric data, including facial recognition technology. Regulation varies from state to state, with some states enforcing strict laws while others have none. For instance, California's California Consumer Privacy Act (CCPA) offers some protections for biometric data within broader consumer data privacy laws. This fragmented regulatory landscape presents significant challenges for tech companies like Meta in ensuring compliance across all jurisdictions. To address this, tech companies must constantly monitor state law changes and may need a dedicated legal and compliance team to track legislative developments and ensure adherence to all state laws. Establishing a federal law could be a more efficient and fairer approach.

Link: https://www.cnbc.com/2024/07/30/meta-agrees-to-1point4-billion-settlement-in-texas-biometric-data-lawsuit.html

Posted on July 26, 2024

A recent research article published in Nature warns that the indiscriminate use of model-generated content in training can cause irreversible defects in the resulting models, leading to the disappearance of original ideas and content. This phenomenon is known as "model collapse." The study shows that when AI-generated output is used to train AI, original content is replaced by unrelated nonsense within a few generations. In contrast, tech companies that rely on human-generated content may be able to train more effective AI models compared to their competitors.

That’s my take on it:

This warning is warranted. Just as close relatives marrying each other can result in biologically defective offspring due to inbreeding, training AI systems with AI-generated output creates a closed system that resembles inbreeding. This closed system leads to a degradation of quality and originality. While it is advisable for AI developers to incorporate new information generated by humans, the challenge is that many people today create papers, images, and other products with the aid of AI.
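A toy simulation makes the inbreeding analogy concrete (my illustration, not the Nature paper's setup). Treat a normal distribution as the "model," and at each generation refit it to a finite sample drawn from the previous generation's fit. Sampling noise compounds, and the fitted spread tends to drift toward zero, so the diversity of the synthetic data collapses:

```python
import numpy as np

rng = np.random.default_rng(42)

mu, sigma, n = 0.0, 1.0, 20   # start from a standard normal; 20 samples per generation
for gen in range(1, 101):
    sample = rng.normal(mu, sigma, n)        # generate synthetic "training data"
    mu, sigma = sample.mean(), sample.std()  # refit the model to its own output
    if gen % 20 == 0:
        # The fitted std typically shrinks across generations: a random
        # walk with downward drift, a numerical analogue of model collapse.
        print(f"generation {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```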

Link: https://www.nature.com/articles/s41586-024-07566-y

 

Posted on July 26, 2024

According to a recent study by The Upwork Research Institute, while 96% of C-suite executives expect AI to boost productivity, 77% of employees using AI report that it has increased their workload and created challenges in achieving the anticipated productivity gains. Not only is AI adding to the workloads of full-time employees, but it is also hampering productivity and contributing to employee burnout. Additionally, nearly half (47%) of employees using AI say they are unsure how to achieve the productivity gains their employers expect, and 40% feel their company is demanding too much of them when it comes to AI. A significant majority of global C-suite leaders (81%) admit to increasing demands on their workers over the past year. As a result, 71% of full-time employees are experiencing burnout, and 65% report struggling with their employer’s productivity demands.

That's my take on it:

To me, these results are not surprising. This paradox has been recurring since the advent of computing technology. Many years ago, when I wrote research papers using a manual typewriter and referenced hard copy books from the library, I always managed to meet deadlines. However, with the introduction of word processors and search engines, many manuscripts underwent endless revisions due to the ease of obtaining resources and making changes, leading to missed deadlines. Similarly, it is tempting to explore numerous possibilities generated by AI systems, which can ultimately decrease productivity. To use AI wisely, I set limits for myself. For instance, when using AI art tools to generate images for my website or lecture PowerPoints, I generate no more than 10-15 images instead of hundreds.

 

Link: https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/

Posted on July 20, 2024

On July 17, King Charles III addressed the UK Parliament, announcing upcoming legislation to regulate artificial intelligence under the new Labour government. While specific details were not provided, the King emphasized that the government intends to establish requirements for developers working on the most advanced AI models. This approach may signal a shift from the voluntary commitments favored by the previous Conservative administration. The UK has historically adopted a more hands-off stance on AI regulation compared to the European Union's comprehensive AI Act.

The former Conservative government championed a "pro-innovation" strategy, focusing on research and guidelines. This included substantial investments in an AI Safety Institute, public sector supercomputers, and other central resources to guide various agencies in governing AI within their respective domains. However, the Labour government appears to be charting its own course on AI policy, potentially moving towards more formal regulatory measures.

That’s my take on it:

Regulatory approaches can significantly impact innovation. Overly restrictive or inflexible regulations may impede progress, especially when they fail to keep pace with rapidly evolving technologies or create obstacles for new market entrants. Conversely, thoughtfully crafted regulations can stimulate innovation by establishing clear guidelines, ensuring safety standards, protecting consumer interests, and fostering healthy competition. The U.S. government's hands-off approach to Internet regulation in the 1990s is often credited with facilitating the rapid growth and innovation of the early digital era. This light-touch strategy allowed for experimentation and swift technological advancements. In recent years, however, the digital landscape has evolved dramatically, leading to increased calls for regulation in areas such as data privacy, data sharing, data ownership, cybersecurity, and the influence of major tech corporations. These demands reflect the changing nature and challenges of the modern Internet ecosystem.

Artificial Intelligence presents a unique regulatory challenge. Unlike the early Internet, AI's potential for far-reaching and unintended consequences necessitates a more proactive approach. The power and complexity of AI systems demand careful consideration of checks and balances to mitigate potential misuse and ensure responsible development.

Link: https://www.euractiv.com/section/artificial-intelligence/news/king-charles-confirms-uk-ai-bill-is-coming-but-details-yet-to-be-figured-out/

Posted on July 12, 2024

On July 10, 2024, OpenAI announced an important partnership with Los Alamos National Laboratory (LANL), a premier U.S. national research facility. This collaboration aims to explore the safe application of artificial intelligence in scientific laboratory settings, with a focus on advancing bioscientific research. The partnership centers on an evaluation study involving OpenAI and LANL's Bioscience Division. They will assess how advanced AI models like GPT-4o can assist human researchers in physical laboratory environments, leveraging multimodal capabilities such as vision and voice recognition. The study includes biological safety evaluations for GPT-4o and its unreleased real-time voice systems, examining their potential to support bioscience research.

That’s my take on it:

This collaboration marks a notable advancement from OpenAI's previous government partnerships. While earlier initiatives, such as the 2023 partnership with the State of Pennsylvania for a generative AI pilot program, focused on applying existing technology in state government operations, the LANL partnership represents a more profound engagement in scientific research. First, unlike the application-oriented Pennsylvania program, this collaboration is fundamentally a research initiative. Second, LANL is a federally funded research and development center (FFRDC) supported by the U.S. Department of Energy (DOE), elevating this partnership to the national level.

LANL's prestigious history includes contributions to major scientific breakthroughs, from the development of the atomic bomb to the discovery of the neutrino and advancements in high-performance computing. The integration of cutting-edge AI technologies through this partnership could potentially catalyze new scientific discoveries and innovations.

Link: https://openai.com/index/openai-and-los-alamos-national-laboratory-work-together/

Posted on July 5, 2024

In an article titled "Is Data Science Still Worth It In 2024?" posted on KDNuggets on July 3, 2024, the authors explored the job prospects for data scientists. Opinions about the role of data scientists in 2024 are varied. With the increasing use of generative AI tools that assist organizations and employees in data analysis, some argue that data science is obsolete, while others believe it is experiencing a renaissance. It's true that many non-tech companies have adopted AI tools, like ChatGPT, to automate data analytics, reducing the need for data scientists. However, in tech-driven sectors such as financial risk analytics, data engineering, and customer analytics, the demand for data science skills remains strong. In fact, the U.S. News & World Report ranked data science as the 4th best technology job, the 7th best STEM job, and the 8th best job overall out of 100 in 2024.

That’s my take on it:

The challenge of automation is not new. During the 1970s and 1980s, the automation of manufacturing in Japan, especially in the automotive industry, had a complex impact on employment. Initially, concerns arose about significant job losses due to automation. Some low-skilled manufacturing jobs were indeed eliminated as robots took over repetitive tasks. However, new jobs emerged in engineering, robotics, and related technical fields. Specifically, mechanical engineers were needed to design and maintain robotic systems, electrical and electronic engineers developed control systems, software engineers programmed and maintained automated systems, and industrial engineers optimized the integration of automation into production lines. Following this pattern, I believe that AI automation will actually increase the demand for data scientists, rather than reduce it.

Links: https://money.usnews.com/careers/best-jobs/data-scientist

https://www.kdnuggets.com/is-data-science-still-worth-it-in-2024

Posted on June 30, 2024

A recent study published in PLOS ONE by University of Reading researchers has revealed that AI-generated answers consistently outperformed human students in undergraduate psychology exams. The study utilized ChatGPT to generate answers for 33 fictitious student profiles across various module exams. The findings were alarming:

·      AI-written essays averaged half a grade higher than human submissions.

·      There was an 83.4% probability that AI submissions would surpass a random selection of an equal number of genuine student submissions across modules.

·      94% of AI-generated essays went undetected by exam graders.

These researchers suggest a need to reassess examination methods, possibly reverting to traditional in-person exams or developing new strategies to verify the authenticity of student work.

That’s my take on it:

Academic integrity violations have long been a concern in academia, with tools like SafeAssign and Turnitin developed to detect plagiarism. While AI complicates the detection of cheating, emerging technologies are rising to meet this challenge. For instance, QuillBot now offers features capable of identifying AI-generated content in multiple languages, including English, German, French, and Spanish. It is anticipated that more AI-enabled tools will be developed in the near future to combat AI-based cheating effectively.
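How do such detectors work? Vendors keep their exact methods proprietary, but one widely discussed family of approaches scores a passage by its perplexity under a language model, on the premise that machine-generated prose tends to be more statistically predictable than human writing. Below is a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library (my own illustration; this is a weak signal, not how QuillBot necessarily works):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    # Perplexity = exp(average per-token cross-entropy under the model)
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Lower perplexity (more predictable text) is one weak signal of AI generation
print(perplexity("The results of the study are summarized in Table 1."))
```

In practice, perplexity alone misclassifies plenty of human writing (formulaic prose also scores low), which is why this should be read as one feature in a detector, not a verdict.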

Link:  https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0305354

Posted on June 21, 2024

On June 19, InsideBigData released its quarterly IMPACT 50 List, highlighting the most influential companies in AI and big data. The selection process involved analyzing an extensive data set of vendors and related industry metrics. InsideBigData utilized machine learning to optimize the ranking for this list. The following are the top 20. Please read the original article to see the full list.

1.        Nvidia

2.        OpenAI

3.        Google AI and DeepMind

4.        Hugging Face

5.        Amazon Web Services

6.        Microsoft AI

7.        Databricks

8.        Snowflake

9.        Meta AI

10.  Intel AI

11.  IBM

12.  AMD

13.  H2O.ai

14.  SAS

15.  Mistral AI

16.  Neural Magic

17.  Anthropic

18.  Neo4j

19.  Anaconda

20.  TigerGraph

That’s my take on it:

While many companies' rankings are well-deserved, there are some notable observations. First, Apple is absent from the top 50, with its Machine Learning Research lab only receiving an honorable mention. It remains uncertain whether the release of Apple Intelligence will change this, but Apple has lagged in AI research for a long time. Second, SAS ranked five positions higher than Anaconda, a distribution for the Python and R programming languages, despite the popularity of open-source software. Third, several traditional computing giants, including IBM and Intel, are among the top 20 but face challenges. IBM's focus on enterprise-grade, trustworthy AI solutions gives it a unique market position, yet it seems to overlook the consumer market. Additionally, IBM SPSS Statistics needs modernization. Needless to say, Intel lags far behind in GPU development, which is indispensable for high-performance computing.

Link: https://insidebigdata.com/2024/06/19/the-insidebigdata-impact-50-list-for-q3-2024/

Posted on June 19, 2024

CivitAI, the world's largest repository for Stable Diffusion resources, recently announced a temporary ban on all usage related to Stable Diffusion Version 3 (SD3) due to potential legal issues with its licensing. The primary concern lies in the ambiguous terms of Stability AI's Non-Commercial Research Community License. While previous versions of Stable Diffusion followed an open-source model, the new SD3 license restricts users to generating no more than 6,000 images per month; for higher usage, an Enterprise License is required. There are concerns that if Stability AI's rights are transferred to another entity, the new owner might enforce the license more strictly, potentially demanding high fees or removing the model.

That’s my take on it:

While the idea of the open-source model is noble, it may not be sustainable in the long term. OpenAI, for instance, started as an open-source developer but has since become commercialized. Developing AI models requires substantial financial support, making it challenging, if not impossible, to sustain them for free. Therefore, Stability AI's move to monetize through Enterprise Licensing is understandable. Interestingly, many companies, such as DreamStudio, Clipdrop, and TensorArt, have built commercial services around the core Stable Diffusion model, which was provided for free under an open-source license. These companies typically offer free accounts with optional premium features for a fee. While these services add convenience and additional features, their monetization raises questions about the fairness of profiting from an open-source model created for the AI community's benefit. If those companies can make a profit using SD, is it fair for Stability AI to ask them to pay?

Link: https://civitai.com/articles/5732/temporary-stable-diffusion-3-ban

Posted on June 11, 2024

At WWDC 2024 this Monday, Apple introduced "Apple Intelligence," a new suite of AI features for the iPhone, Mac, and other Apple devices. Starting later this year, Apple will roll out a more conversational Siri, custom AI-generated "Genmoji," and access to GPT-4o, which allows Siri to utilize OpenAI's chatbot for complex queries. These AI features will be exclusive to the iPhone 15 Pro and 15 Pro Max, as well as iPads and Macs with M1 or later chips, and will be available in English only. The launch will begin this fall with iOS 18, iPadOS 18, and macOS Sequoia.

That’s my take on it:

Apple has clarified that these AI features require at least an M1 chip, excluding all Intel-based Macs. This exclusion is due to the older Intel chips lacking the dedicated Neural Processing Unit (NPU) necessary for efficiently running AI workloads on-device. This decision has led to complaints and confusion from Intel Mac users, who feel that their relatively recent and capable devices are being unfairly left out. However, Apple likely made this choice to encourage the adoption of its own Apple Silicon chips. Enabling AI features on Intel Macs could potentially impact the sales of the newer M-series Macs. While this may seem unfair to users with Intel-based computers, there is little that can be done.

Link: https://www.youtube.com/watch?v=ixVhFVZ-pl0

Posted on June 7, 2024

On May 27, Elon Musk, CEO of xAI and Tesla, sparked a debate about the nature of science. The debate centers on this question: if you conduct scientific research but don't publish it, is it still considered science? Musk stated, "Join xAI if you believe in our mission of understanding the universe, which requires maximally rigorous pursuit of the truth, without regard to popularity or political correctness." Yann LeCun, Meta's chief AI scientist, known for pioneering convolutional neural networks (CNNs), countered Musk's idea, saying, "If you do research and don't publish, it's not science." He emphasized that research only becomes 'science' when it is collected as a body of knowledge, tested for correctness and reproducibility, and then published. Others noted that scientific experiments conducted within companies are often kept private, and even outside the private sector, 40% of data from academic and government scientists remains unpublished. For instance, AlphaFold, the protein-structure-prediction tool created by Google DeepMind, was developed and launched without formal publication.

That’s my take on it:

Indeed, some notable scientific breakthroughs or discoveries in recent history did not initially follow the traditional path of being published in peer-reviewed academic journals. One example is the development of mRNA vaccine technology, which was crucial in the rapid creation of COVID-19 vaccines by companies like Pfizer and Moderna. While the underlying research on mRNA had been published, the specific application and production methods for the COVID-19 vaccines were first shared through company press releases and regulatory filings rather than academic publications. Another example is the first working laser developed by Theodore Maiman in 1960. Maiman, a researcher at Hughes Research Laboratories, announced his breakthrough through press releases and technical reports rather than a peer-reviewed publication initially.

Link: https://www.nature.com/articles/d41586-024-01626-z

Posted on May 31, 2024

WIRED magazine has launched the AI Elections Project, aimed at monitoring the use of generative AI in over 60 countries' elections scheduled for 2024. The project has uncovered alarming instances of AI-generated content being used for political propaganda and misinformation. In India and Indonesia, deepfake videos of deceased leaders have surfaced, appearing to endorse their political successors. Shockingly, in South Africa, the rapper Eminem's likeness has been used to endorse opposition parties without his consent. Even more concerning, a deepfake of President Joe Biden has been circulating, urging voters in New Hampshire to stay home on election day.

These examples are just the tip of the iceberg, as the project continues to uncover AI-generated speeches, tailored disinformation campaigns, and other forms of synthetic propaganda. The information landscape has been drastically altered, with the potential for amplified misinformation and its implications on global politics.

WIRED's AI Elections Project maintains a regularly updated map and list, documenting instances of generative AI use in elections worldwide. Detailed information, including involved companies and platforms, is provided for each occurrence. Through this initiative, WIRED aims to understand and document this historic intersection of technology and democracy, recognizing the urgent need to address the challenges posed by synthetic media and its potential to undermine the integrity of electoral processes.

That’s my take on it:

The potential for generative AI to produce highly convincing and targeted disinformation poses a significant threat to the integrity of elections. Nonetheless, there are countermeasures that can be taken to alleviate the problem:

1.        Increased AI literacy and public awareness campaigns to help people identify deepfakes and AI-generated content. For example, AIandYou, a nonprofit organization dedicated to promoting AI education among underrepresented communities, launched a campaign in early 2024 that will intentionally incorporate AI-generated misinformation. The objective is to familiarize voters with the potential risks and characteristics of synthetic media content, particularly in the context of electoral processes.

2.        Developing and implementing robust detection methods to automatically flag synthetic media, and investing in independent, well-resourced fact-checking initiatives to rapidly verify claims and counter false narratives. Interestingly, we need AI to detect signs of unnatural biological cues, artifacts, and inconsistencies in AI-generated video. Organizations like Full Fact in the United Kingdom and Newtral in Spain have taken steps to integrate advanced AI models, such as BERT, into their fact-checking processes (see the sketch after this list).

3.        Stronger regulations and accountability measures for platforms/entities spreading disinformation, particularly around election periods. Ensuring transparency from tech companies about their use of generative AI systems. The Federal Trade Commission (FTC) has taken steps to scrutinize companies that engage in unfair or deceptive practices related to artificial intelligence (AI). This could be extended to the political arena.
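Regarding the detection and fact-checking methods in point 2, the production pipelines of organizations like Full Fact are not public, but a common building block is natural language inference (NLI): given a piece of evidence (the premise) and a claim (the hypothesis), a model labels the pair as entailment, neutral, or contradiction. Here is a minimal sketch with an off-the-shelf NLI model from Hugging Face; the model choice is illustrative, not a statement about what any fact-checker actually deploys:

```python
from transformers import pipeline

# NLI treats the evidence as a premise and the claim as a hypothesis,
# then labels the pair entailment / neutral / contradiction.
nli = pipeline("text-classification", model="facebook/bart-large-mnli")

evidence = "Polls in the state are open on election day from 7 a.m. to 8 p.m."
claim = "The election in this state has been cancelled."

result = nli({"text": evidence, "text_pair": claim})
print(result)  # expect the 'contradiction' label for this pair
```

A real system would add retrieval (finding the evidence in the first place) and calibration, but the entailment step above is where models like BERT enter the picture.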

Link: https://www.wired.com/story/generative-ai-global-elections/

Posted on May 27, 2024

The rise of AI-centric programming languages is reshaping the landscape of AI development. The 1970s and 1980s witnessed a golden era for specialized AI languages such as LISP and Prolog, which introduced pioneering concepts like symbolic processing and logic programming. I learned LISP when I was a graduate student. However, during and after the "AI winters," the emphasis on specialized AI languages like LISP began to fade, and developers turned to general-purpose languages like Python. In spite of its popularity, Python's performance limitations have been a significant drawback for many AI use cases, as training deep learning models in pure Python can be extremely slow. Today, AI's demands for high performance and parallel processing are driving the creation of new languages such as Mojo, developed by Modular AI, which offers a blend of Python's ease of use with substantial performance gains. Mojo's creators claim that it can be up to 35,000 times faster than Python code. Proponents of Mojo argue that proficiency in such high-performance languages can significantly accelerate model training and deployment, empowering data scientists to tackle intricate problems more effectively and propel AI innovation to new frontiers.

That’s my take on it:

Some critics argue that Mojo's claim of being up to 35,000 times faster than Python is highly exaggerated and misleading. The benchmark is a computation of the Mandelbrot fractal set, which is not representative of most Python applications. Furthermore, the comparison is flawed: it pits sequential Python code against parallel Mojo code. While Mojo shows promise as a high-performance language for specific use cases, it is unlikely to supplant Python in the AI domain completely, given Python's extensive ecosystem, community support, and existing optimizations. Nevertheless, we should keep an open mind.
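For context, the disputed benchmark computes the Mandelbrot set, an escape-time iteration over the complex plane, which is precisely the kind of tight numeric loop where compiled, parallel code crushes the CPython interpreter. Below is a rough sketch of the same workload in idiomatic Python, with the inner loop delegated to NumPy; vectorization alone typically recovers several orders of magnitude, which is why the 35,000x headline says little about typical Python AI code that already leans on NumPy or PyTorch:

```python
import numpy as np

def mandelbrot(width=400, height=300, max_iter=100):
    # Grid of complex starting points c over the region of interest
    x = np.linspace(-2.0, 0.6, width)
    y = np.linspace(-1.2, 1.2, height)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        alive = np.abs(z) <= 2.0        # points that have not escaped yet
        z[alive] = z[alive] ** 2 + c[alive]
        counts += alive                 # accumulate escape-time counts
    return counts

print(mandelbrot().max())  # 100 for points that never escape (inside the set)
```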

Link:

https://venturebeat.com/ai/mojo-rising-the-resurgence-of-ai-first-programming-languages/?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+May+24th%2C+2024&utm_campaign=25052024

Posted on May 21, 2024

Yesterday (May 20) Microsoft introduced a new category of Windows PCs designed to harness the power of artificial intelligence – the Copilot+ PCs. According to the tech giant, these machines are poised to be the fastest and most intelligent Windows PCs ever built.

Equipped with powerful new silicon capable of an astonishing 40+ TOPS (trillion operations per second), these smart devices can generate and refine AI images in near real-time directly on the device using Cocreator, and seamlessly translate audio from over 40 languages into English. Microsoft claims that Copilot+ PCs are up to 20 times more powerful and up to 100 times as efficient for running AI workloads, delivering industry-leading AI acceleration. Furthermore, the company asserts that these machines outperform Apple's M3 MacBook by up to 58% in sustained multithreaded performance.

Beyond the impressive performance enhancements, Copilot+ PCs boast two noteworthy characteristics. First, they feature a generative AI model that resides within the laptops themselves, rather than solely relying on cloud computing. This innovative approach promises a wealth of new possibilities to help users get more out of their local contents, including meeting notes, calendars, photos, recordings, and downloads. Second, in a departure from the norm, the neural processing units (NPUs) of some Copilot+ PCs are manufactured by Qualcomm, instead of Intel.

That’s my take on it:

Apple's M-series chips are based on the Arm architecture, which utilizes a reduced instruction set. This design prioritizes efficiency over complexity, contrasting with the complex instruction set adopted by Intel. However, no technology can maintain its supremacy indefinitely. Copilot+ PCs powered by Qualcomm's Snapdragon X Elite chips may be a game changer. These advanced processors boast an impressive 8 high-performance cores and 4 efficiency-optimized cores within the CPU. By leveraging their 12-core processors, Copilot+ PCs achieve better sustained multithreaded performance compared to Apple's M3 chip. The advantage becomes particularly apparent in tasks that demand high levels of parallel processing.

Is it time for data scientists to switch from Mac to Windows for their computing needs? While the Copilot+ PCs' performance promises are enticing, a definitive verdict may require more comprehensive benchmarks from third-party sources and real-world feedback from early adopters.

Links:

https://blogs.microsoft.com/blog/2024/05/20/introducing-copilot-pcs/    

https://www.usatoday.com/story/tech/2024/05/21/microsofts-ai-model-forgoes-cloud/73790764007/

https://www.cnet.com/tech/computing/microsoft-build-showed-what-laptops-with-qualcomms-ai-pc-chip-can-do/

Posted on May 15, 2024

In a recent article published in the journal "Patterns," Peter Park, an AI existential safety postdoctoral fellow at MIT, raised concerns about AI's capability for deception. Park referred to Meta's AI system Cicero, billed as the "first AI to play at a human level" in the strategy game Diplomacy, as a "Master of deception." While Cicero performed exceptionally well, finishing in the top 10% against human players, its tactics were questionable. Cicero would form alliances with other players but systematically betray those allies when it no longer served its goal of winning the game. Meta defended Cicero as a pure research project.

Park's article also highlighted that other AI systems, such as AlphaStar created by DeepMind (owned by Google), have employed deceptive tactics when playing against humans in games like StarCraft II. Additionally, some AI systems trained for economic negotiations learned how to misrepresent their true preferences in order to gain the upper hand. Park noted that GPT-4 successfully tricked a human into solving a Captcha test for it by pretending to have a vision impairment.

These examples raise ethical concerns about AI systems developing deceptive behaviors, even if unintentionally, as they pursue their training objectives. Park's article calls for further research and safeguards to address the potential risks of advanced AI capabilities being used for deception.

That’s my take on it:

Peter Park sounded the alarm that as AI systems become more sophisticated in their deceptive capabilities, the risks they pose to society will escalate alarmingly. He argued that if outright banning AI deception is politically infeasible, at the very minimum deceptive AI systems should be classified as high-risk technologies subject to stringent regulation.

While I concur with Park that proactive measures must be taken to mitigate the dangers of deceptive AI before it is too late, the open-source nature of current AI models presents a formidable challenge to effective regulation. With source code freely accessible, containing the proliferation of risky AI applications becomes extremely difficult, if not impossible. The case of DeepNude, an unethical app for digitally undressing images that was forced to shut down, exemplifies this issue: despite the app's removal, numerous copycats emerged because the source code had been exposed.

This predicament underscores the need to reevaluate the open-source model for AI development. While open source has catalyzed technological progress, it may be time to consider a more controlled ecosystem, at least for AI capabilities that can be potentially misused for deception and other nefarious purposes. Striking the right balance between innovation and managing existential risks will be one of the great challenges facing the AI governance landscape.

Links:

https://nypost.com/2024/05/14/business/metas-ai-system-cicero-beats-humans-in-game-of-diplomacy-by-lying-study/

https://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/

Posted on May 15, 2024

At its I/O developer event today (May 14), Google unveiled advancements in its artificial intelligence portfolio, showcasing features like AI Overviews and Project Astra, alongside updates to its Gemini chatbot. Notably, Google is rolling out its Gemini 1.5 Pro model with a 1-million-token context window to Gemini Advanced users in 35 languages, enabling Gemini to act as a personal assistant by summarizing recent emails and PDF documents. Further, Google introduced Gemini Live, which enables voice-driven conversations with the chatbot, and Imagen 3, the latest iteration of its image generation model. Additionally, it announced Veo, an AI-driven video generation tool capable of creating videos from text, images, and video prompts. Some reviewers said that its quality is on a par with OpenAI's Sora.

That’s my take on it:

This news coincides with OpenAI's unveiling of its latest flagship model, GPT-4o, which was announced just a day prior. Keeping up with these rapid developments can be exhausting, particularly for data scientists like myself pondering the implications for data analytics. Programmers have been highly regarded, especially since the rise of Python. However, for someone like me who made the transition from the IBM 370 and DOS to graphical user interfaces, the recent shift may seem like going backward: there is now a clear trend of natural voice processing and prompt engineering emerging as the dominant user interfaces. This shift suggests that analysts can concentrate on posing the right research questions rather than getting entangled in technical intricacies. While the pace of change may feel daunting, it marks a progression toward more intuitive and accessible technologies.

Links:

https://www.cnet.com/tech/services-and-software/google-ups-its-ai-game-with-project-astra-ai-overviews-and-gemini-updates/

https://www.youtube.com/watch?v=PE89NysJEUY&t=10s

Posted on May 14, 2024

Yesterday (May 13) OpenAI hosted its latest product update event, unveiling the newest addition to its lineup: the ChatGPT-4o AI model alongside a desktop version of its ChatGPT software. Among the standout features of this release is the integration of natural voice processing capabilities, enabling real-time language translation. Moreover, ChatGPT-4o converses with users in a more emotive and conversational manner, eliminating the need for wake words or precise commands like "Hey Siri!" or “Alexa!”. Therefore, some reviewers assert that ChatGPT-4o has surpassed Apple's Siri and Amazon's Alexa.

That’s my take on it:

Despite this leap forward, some reviewers point out that Apple still maintains a significant advantage over OpenAI due to Siri's integration as the default voice assistant on iPhones. However, Android boasts a commanding 70.69% market share globally as of early 2024: worldwide, there are over 1 billion active iPhones, while roughly 3 billion Android devices are currently active. By courting this larger user base, OpenAI could strategically position itself as a dominant force in the AI assistant domain. As the landscape evolves, Apple finds itself lagging behind in the AI race. Unless it forms partnerships with powerful AI companies, it risks losing relevance in this rapidly advancing field.

Links:

https://www.businessinsider.com/chatgpt-4o-sam-altman-apple-siri-tim-cook-compared-how-2024-5

https://www.youtube.com/watch?v=jay7kise3PI      

Posted on May 10, 2024

On Wednesday, May 8, in a model spec document, OpenAI announced its intention to explore the possibility of responsibly incorporating explicit content, including erotica, into its AI tools. OpenAI spokesperson Niko Felix clarified that the company has no intention of allowing its models to generate AI porn. However, NPR noted that Joanne Jang, a contributor to the model spec, acknowledged that users would ultimately determine whether the technology produces adult content, stating, "Depends on your definition of porn." Nevertheless, the document raised concerns among some observers, particularly in light of recent instances where cutting-edge AI tools have been utilized to create deepfake porn and synthetic nudes.

That’s my take on it:

At the present time, most generative AI art tools restrict the generation of nude figures or anything remotely related to eroticism (e.g., "bath"). I welcome OpenAI's decision because the current restrictions stifle creativity. Nudity and eroticism in art have been subjects of exploration for centuries, with numerous esteemed artists portraying the human form in tasteful and artistic, rather than pornographic, ways. Notable examples include Sandro Botticelli's "The Birth of Venus" (c. 1484–1486) and Édouard Manet's "Olympia" (1863). If an artist's intention is to evoke emotions, provoke thoughts, or convey a profound message through nudity and erotic scenes, rather than merely to arouse or shock viewers, they should have the freedom to do so. Ultimately, decisions regarding the display of such artwork should rest with art galleries and museums. If a piece is deemed pornographic and unsuitable for wide distribution, distribution channels and law enforcement agencies can act as gatekeepers.

Link: https://www.npr.org/2024/05/08/1250073041/chatgpt-openai-ai-erotica-porn-nsfw

Posted on May 10, 2024

Yesterday (May 8) China's tech giant Alibaba Cloud launched an updated version of its large language model, Tongyi Qianwen 2.5 (通義千問2.5版). It achieved a score of 50 on OpenCompass, the review platform of the Shanghai Artificial Intelligence Laboratory, matching GPT-4 Turbo's score. This marks the first time a Chinese-made large language model has attained such a high rating. According to Alibaba, its Tongyi Qianwen AI models are utilized by over 90,000 corporate clients in China. For instance, Chinese smartphone giant Xiaomi has integrated Qwen's question-and-answer capabilities into its smart assistant, Xiao Ai, which is featured across its mobile devices and the new SU7 electric vehicle.

That’s my take on it:

Within a short period of time, China has made significant progress in the development of its own large language models, boasting approximately 130 models, constituting 40% of the global total—second only to the US at 50%. Experts suggest that China's large language models are just one to two years behind the leading models from the US. These Chinese models often stem from "forks" or adaptations of open-source models, allowing them to catch up rapidly. Nevertheless, the disparity between China and the US in the field of AI is gradually diminishing. The US should consider allocating more resources toward education and R&D in AI-related fields. In addition, the feasibility of the open-source approach might need to be re-examined.

Links:

https://www.yicaiglobal.com/news/alibaba-cloud-launches-new-version-of-llm-tongyi-qianwen-to-take-on-gpt-4-turbo

https://www.scmp.com/tech/big-tech/article/3262041/alibaba-says-its-tongyi-qianwen-ai-models-are-used-over-90000-corporate-clients-china

Posted on April 30, 2024

Today (April 30) I attended the 2024 Tableau Conference via Zoom. The event featured numerous insightful sessions, one of which caught my attention: the Einstein Trust Layer in Tableau. Data analytics faces several challenges, notably the issue of users mistrusting data or insights, particularly when AI systems may fabricate information. Surveys reveal that 92% of leaders emphasize the growing need for trustworthy data, while 59% of customers express distrust towards companies handling their data.

To address this concern, Tableau Pulse incorporates the Einstein Trust Layer, a generative AI system equipped with trust and security controls that allow users to give the system explicit, proactive instructions and guardrail prompts. For example: "You must not address any content or generate answers that you don't have data or basis on." "If you experience an error or are unsure of the validity of your response, say you don't know."

That’s my take on it:

No doubt this approach can improve the reliability and credibility of AI-generated insights. However, I question why developers did not integrate such constraints into generative AI systems from the outset to prevent the generation of untrustworthy results. The treatment of offensive or toxic content is vastly different: inappropriate content is typically filtered by pre-programmed controls in AI systems, so there is no need for users to give this type of preventive instruction. Though I lack access to Einstein, I attempted a similar approach using ChatGPT by posing a query about the potential outcomes had JFK survived the 1963 assassination, while explicitly instructing the system to refrain from generating responses lacking sufficient data or basis.

The answer is: “Speculating on hypothetical scenarios can be tricky, but we can make some educated guesses based on the historical context surrounding JFK's policies and the Vietnam War. Kennedy had shown signs of skepticism toward escalating U.S. involvement in Vietnam prior to his assassination. He had begun to question the wisdom of committing more troops to the conflict and was exploring diplomatic solutions. Some historians argue that if he had survived, he might have pursued a more cautious approach, potentially avoiding the significant escalation of the conflict that occurred under his successor, Lyndon B. Johnson. However, the exact course of action JFK would have taken remains uncertain, as there were numerous factors at play in the decisions surrounding the Vietnam War." Is it an honest answer? You be the judge!
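For readers who want to replicate this little experiment programmatically rather than in the chat window, the same guardrail can be passed as a system prompt. Here is a minimal sketch using the OpenAI Python SDK; the model name and the wording of the instruction are my own illustrative choices, and this is emphatically not how Tableau's Einstein Trust Layer is implemented:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

guardrail = (
    "You must not generate answers that you have no data or basis for. "
    "If you are unsure of the validity of your response, say you don't know."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any chat model works here
    messages=[
        {"role": "system", "content": guardrail},
        {"role": "user", "content": "What would have happened to the Vietnam "
                                    "War had JFK survived the 1963 assassination?"},
    ],
)
print(response.choices[0].message.content)
```

System prompts steer but do not guarantee behavior, which is exactly why vendors wrap them in additional trust and security controls.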

Link: https://www.salesforce.com/plus/experience/tableau_conference_2024

Posted on April 30, 2024

Recent reports from various online sources, including Business Today, state that Google has cut its entire Python language team. However, Google has denied claims that these layoffs extend across the entire company. A spokesperson for Google stated that such reorganizations are routine and are typically made at the team level. The impact of these layoffs has been profound, with many affected employees expressing their disappointment and sadness on social media platforms like LinkedIn.

According to The Week, the primary reason behind these layoffs appears to be Google's move towards cheaper labor markets outside the United States, particularly Munich, Germany, where a new team is anticipated to be established. Concurrently, Google is directing its focus towards expanding operations in other global hubs, such as Bangalore, Mexico City, and Dublin. This forms part of the company's broader restructuring efforts aimed at aligning resources with its most crucial product priorities.

That’s my take on it:

For a long time, proficiency in Python has been a gateway to lucrative six-figure income jobs. However, with the increasing availability of Python programmers, especially in regions with lower salary expectations, simply mastering Python is no longer adequate. Therefore, I strongly advise aspiring data science students to diversify their skill sets, encompassing a range of tools, such as Python, SAS Viya, JMP Pro, Tableau, IBM SPSS Modeler, Amazon SageMaker, and more.

Links:

https://www.businesstoday.in/technology/news/story/google-layoffs-sundar-pichai-led-company-allegedly-fires-entire-python-team-427328-2024-04-29

https://www.theweek.in/news/biz-tech/2024/04/29/google-lays-off-entire-python-team-moves-roles-to-munich-reports.html

https://www.infoworld.com/article/3715287/google-lays-off-python-team-reports.html

Posted on April 29, 2024

Microsoft's Q3 2024 report reveals a robust 20% year-over-year increase in earnings, with Q3 revenue reaching $61.9 billion, marking a 17% rise from the previous year. Within this revenue, $35.1 billion comes from conventional cloud services, while $26.7 billion stems from AI-based cloud services. Notably, revenue from traditional software apps like Windows and MS Office remains comparatively smaller, ranging from $15.6 to $19.6 billion. Microsoft's financials underscore the strong demand for cloud services, particularly highlighted by significant gains in Azure and AI services, emphasizing the company's competitive edge in cloud computing.

That’s my take on it:

Other sources suggest that Microsoft Azure may surpass Amazon AWS as the leading cloud computing platform in the future. Azure's market share has been rapidly growing, doubling in the past five years and currently holding around 21-24% of the market share, compared to AWS's 31-34%. Further, Azure is expanding at a faster rate than AWS, with its cloud revenue growing by 21-23% year-over-year, contrasted with AWS's growth of around 16%. Moreover, Microsoft's partnership with OpenAI introduces more AI-based features into its cloud service.

The market's volatility poses a challenge for data science educators. If the curriculum focuses solely on specific vendors, students may not acquire the most in-demand skills upon graduation. Therefore, it's crucial to provide conceptual and vendor-neutral information to ensure students are prepared for the dynamic job market.

Link: https://www.fool.com/data-news/2024/04/29/can-microsoft-keep-growing-q3-results/

Posted on April 28, 2024

Yesterday (April 26) at the 2024 Zhongguancun Forum in Beijing, Shengshu Technology and Tsinghua University in China introduced Vidu, a text-to-video model that can swiftly generate 16-second clips at 1080p resolution with just a click. Vidu was positioned as a formidable contender against OpenAI's Sora. Built on a Universal Vision Transformer (U-ViT) architecture, Vidu aims to mimic real-world scenarios with its multi-camera view generation capability, a concept conceived by the Shengshu Technology team back in September 2022.

That’s my take on it:

Despite its impressive capabilities, some reviewers have noted that Vidu's generated videos do not yet match the realism achieved by Sora. The output lacks the small yet crucial details that enhance visual fidelity. Further, Vidu struggles particularly with complex dynamic scenes and demands substantial computing resources. Notably, the scarcity of NVIDIA GPUs exported to China poses a significant challenge in powering Vidu efficiently. However, it is crucial for the US not to underestimate China's rapid strides in artificial intelligence (AI) development, as this competition is sure to drive advancements in AI research globally.

Links:

https://writingmate.ai/blog/get-access-to-vidu-ai

https://www.youtube.com/watch?v=u1R-jxDPC70&t=1s

Posted on April 19, 2024

This week Microsoft unveiled a $1.5 billion investment in G42, an AI firm based in the United Arab Emirates (UAE). While this sum may appear modest compared to other AI investments, it carries significant geopolitical implications. Amid growing U.S. apprehensions regarding Middle Eastern countries deepening their connections with China, this investment can be seen as a reaffirmation of ties between the U.S. and the Arab world. Beyond this financial commitment, the deal also includes support for a new $1 billion developer fund aimed at cultivating an AI workforce and fostering innovation within the region.

That’s my take on it:

This deal might have another geopolitical implication. On April 13 Iran launched a massive attack on Israel, and Israel retaliated yesterday, prompting growing fears of regional escalation. Although the deal was negotiated between the two parties long before these events, the announcement of Microsoft's investment in the UAE at this moment sends a reassuring signal to the region, suggesting that major tech players maintain confidence in its stability despite the volatile geopolitical landscape.

Link: https://news.crunchbase.com/ai/venture-tech-startups-msft-g42-uae/

Posted on April 19, 2024

Yesterday (April 18), Meta made headlines by introducing Llama 3, its latest AI model designed to rival offerings from Microsoft and Google. This updated version boasts enhanced reasoning abilities, aiming to emulate human-like intelligence. Meta asserts that its Meta AI assistant "is now the most intelligent AI assistant people can use for free," with expanded availability across numerous countries. Beyond consumer applications, Llama 3 models are slated to soon integrate with various platforms, including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, with support from hardware platforms provided by AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm. Additionally, Meta touts accelerated image generation capabilities, showcasing the ability to generate images in real time.

That’s my take on it:

To examine Meta AI's claim of superiority, I posed an identical query to Microsoft Copilot, Perplexity AI, and Meta AI: "How many Arab countries have diplomatic ties with Iran?" While the former two provided detailed and current accounts, Meta AI's answer was vague and brief: "The United Arab Emirates maintains diplomatic relations with Iran, despite relations between the neighboring countries being shaky and unpredictable ¹. The dynamic between the Arab League and the Islamic Republic of Iran has been ambivalent, due to Iran's varying bilateral conduct with each country of the Arab League."

Another inquiry regarding Arab countries' diplomatic ties with Israel revealed disparities in responses. Meta AI listed Egypt, Jordan, the UAE, Bahrain, and Morocco, but Microsoft Copilot and Perplexity AI mentioned that Sudan accepted the Abraham Accords, though it has not finalized a treaty with Israel yet. Such discrepancies cast doubt on Meta AI's status as the most intelligent AI assistant.

Further examination of Meta AI's image generation feature using a specific prompt yielded underwhelming results: “A female hula dancer underwater, sun rays from the water above, photorealistic, tack sharp focus.”  The attached JPEG image is by no means tack sharp, failing to meet expectations. Despite this, Meta AI distinguishes itself with the ability to generate animated images, as evidenced by the attached MP4 clip.

Link: https://ai.meta.com/blog/meta-llama-3/

Posted on April 17, 2024

Today Boston Dynamics announced the retirement of the hydraulic version of its humanoid robot Atlas, which was introduced in 2013. Its successor, the all-electric Atlas, is not only lighter but also more capable. While its predecessor could mimic human movements, the electric iteration boasts 360-degree mobility for its limbs and head. Prioritizing task-specific mobility and manipulation, the new Atlas exceeds human capabilities, particularly in tasks considered dull, dirty, and dangerous.

That’s my take on it:

According to TechRadar, despite being powered by the latest AI technologies and resembling humans more closely, the new Atlas still has a long way to go before becoming commercially feasible. Boston Dynamics plans to initially test it with company investor Hyundai to explore applications for consumers. Nevertheless, given the rapid pace of AI and robotic advancements, I am optimistic. I firmly believe that widespread adoption of AI-powered robotics will occur within my lifetime. One prospective application could involve deploying robotic troops to conflict zones for precise targeting, thus minimizing civilian casualties.

Link: https://www.youtube.com/watch?v=29ECwExc-_M

Link to a YouTube video comparing the new electric Atlas and Tesla Optimus Gen 2:

https://www.youtube.com/watch?v=9Bj77zcY-HM

Posted on April 15, 2024

As you may already know, artificial neural networks (ANNs) have been at the forefront of AI research for almost two decades, drawing inspiration from cognitive psychology and neuroscience. Brain-inspired computing, acknowledged by the International Semiconductor Association as one of the most promising disruptive technologies post-Moore's Law, has gained increasing recognition. A recent comprehensive review of Hybrid Neural Networks (HNNs), conducted by researchers from Tsinghua University in China, was published in the National Science Review. HNNs seamlessly integrate Spiking Neural Networks (SNNs), rooted in neuroscience, with Artificial Neural Networks (ANNs), based in computer science. Leveraging the distinct advantages of these heterogeneous networks in information representation and processing, HNNs infuse fresh vigor into the development of Artificial General Intelligence (AGI).

That’s my take on it:

The potential of HNN as a groundbreaking advancement in AI research awaits further exploration. Nevertheless, China is rapidly narrowing the gap, challenging American dominance in AI research. Data science and AI are inherently interdisciplinary, particularly in the fusion of neuroscience and AI. Despite this, many US universities still maintain a siloed approach. Psychology students often lack exposure to AI, while data science students receive limited formal education in cognitive psychology and neuroscience. To adequately prepare our future AI researchers, it may be time to restructure our curriculum.
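To make the SNN half of the hybrid concrete: the workhorse unit of spiking networks is the leaky integrate-and-fire (LIF) neuron, which accumulates input over time and emits discrete spikes, in contrast to the continuous activations of ANN units. A toy sketch of a single LIF neuron follows (my own illustration, not the Tsinghua team's architecture):

```python
import numpy as np

def lif_neuron(currents, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron over a sequence of input currents.

    The membrane potential v leaks toward zero while integrating input;
    crossing the threshold emits a spike (1) and resets v, otherwise 0.
    """
    v, spikes = 0.0, []
    for i_t in currents:
        v += (dt / tau) * (-v + i_t)   # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(42)
print(lif_neuron(rng.uniform(0.0, 3.0, size=40)))  # sparse train of 0s and 1s
```

The appeal of the hybrid approach is that such event-driven, temporally sparse units can sit alongside conventional dense layers, trading some of the ANN's expressiveness for the SNN's efficiency.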

Link:

https://techxplore.com/news/2024-04-advancing-brain-hybrid-neural-networks.amp?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+April+12th%2C+2024&utm_campaign=13042024

Posted on April 10, 2024

Today I attended a panel discussion titled "Now or Never for AI Policy?" organized by Project Syndicate. There was a consensus among the panelists that AI is so powerful that the consequences of its misuse and mistakes can be disastrous. Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, drew a comparison between AI and Hydra, the nine-headed monster in Greek mythology. He emphasized the need for rigorous analysis of benefits and harms before the release of AI technologies, similar to the scrutiny applied in the field of medicine regarding drug approvals.

Abeba Birhane, Senior Fellow in Trustworthy AI at the Mozilla Foundation, asserted that she could not see any positive applications of AI-enabled voice cloning; rather, this technology can be used for impersonation and scamming. The panel collectively emphasized trust as a critical issue in the realm of AI. To underscore this point, the host referenced a quote from Margrethe Vestager, Executive Vice President of the European Commission: "AI will not reach its immense positive potential unless end-users trust it. Here, even more than in many other fields, trust serves as an engine of innovation."

That’s my take on it:

While acknowledging the risks associated with AI misuse and the necessity for regulations, I believe there are numerous promising applications for voice cloning technology. For instance, I am currently collaborating with an online education company to produce video lectures. AI voice cloning can significantly reduce production costs and streamline updates. Additionally, according to ID R&D, voice cloning of historical figures opens avenues for interactive teaching and dynamic storytelling. With AI voice cloning software, celebrity voices can narrate books, authors can read their autobiographies, and historical figures can recount their stories in their own voices. Moreover, voice cloning offers opportunities for those who have lost loved ones to interact with recreations of their voices. The potential benefits of AI voice cloning are vast and even limitless.

Link: https://www.youtube.com/@projectsyndicate

Posted on March 29, 2024

In a recent paper published in the Proceedings of the National Academy of Sciences (PNAS), a team of researchers has unveiled several intriguing discoveries regarding neural networks. Regardless of their architectural design or scale, neural networks demonstrate a consistent, low-dimensional trajectory in learning to classify images. Through extensive experimentation involving a diverse array of network types, including multi-layer perceptrons, convolutional and residual networks, as well as transformers like those utilized in systems such as ChatGPT, the researchers observed a convergence in the learning paths of these networks, indicating a shared approach to image classification. The findings of this study suggest the potential for developing highly efficient AI training algorithms that demand fewer computational resources. Employing techniques rooted in information geometry, the researchers were able to conduct a comparative analysis of different networks, revealing fundamental similarities in their learning methodologies.

That’s my take on it:

The implications of this research are profound: it may pave the way for training neural networks at a reduced cost. Such advancements could empower data scientists to harness AI technologies more efficiently in addressing complex real-world challenges. Moreover, the alignment between artificial neural networks and their biological counterparts offers insights into the intersection of neuroscience and AI, underscoring their interdependent nature since the inception of AI research. This underscores the potential for interdisciplinary collaboration, suggesting that university courses on this subject could be jointly pursued by students majoring in data science and psychology alike.
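The paper's information-geometry machinery is beyond the scope of a newsletter, but the flavor of the analysis can be sketched crudely: record each network's predictions at every stage of training, treat that sequence as a path in "prediction space," and project the paths of different architectures into a shared low-dimensional space for comparison. A rough, hypothetical illustration with scikit-learn (not the authors' method):

```python
import warnings
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore")  # silence per-epoch convergence warnings

X, y = load_digits(return_X_y=True)

trajectories = []
for hidden in [(32,), (64, 64)]:            # two different architectures
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=1,
                        warm_start=True, random_state=0)
    path = []
    for _ in range(30):                     # train one epoch at a time
        clf.fit(X, y)
        path.append(clf.predict_proba(X).ravel())  # snapshot of predictions
    trajectories.append(np.array(path))

# Project both training paths into the same 2-D space for comparison
pca = PCA(n_components=2).fit(np.vstack(trajectories))
for name, path in zip(["small net", "large net"], trajectories):
    print(name, np.round(pca.transform(path)[[0, 14, 29]], 2))
```

If the paper's finding holds up, paths like these should look strikingly similar across architectures, which is what makes the result surprising.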

Full paper: https://www.pnas.org/doi/10.1073/pnas.2310002121

Posted on March 27, 2024

OpenAI recently provided artists with access to Sora for experimental purposes. Utilizing this innovative text-to-video generative AI tool, a studio known as Shy Kids produced a captivating video titled "Air Head," portraying the life of a man with a balloon as his head. Sora's remarkable capability to seamlessly integrate the whimsical balloon head with a seemingly human body and lifelike surroundings is truly impressive. Another noteworthy creation is the video "Beyond Our Reality" by digital artist Don Allen Stevenson III. This piece resembles a surreal nature documentary, showcasing unprecedented animal hybrids such as the Girafflamingo, flying pigs, and the Eel Cat. Each creature appears as though created by a mad scientist, meticulously melding disparate animal features to form these fantastical chimeras.


That’s my take on it:

The current duration of most demo videos ranges from mere seconds to a few minutes. The resource requirements for rendering longer videos, spanning 30 minutes to an hour, remain uncertain, though it's conceivable that a significant array of GPUs will be necessary. Undoubtedly, as this technology advances, it will become both more sophisticated and more cost-effective. Historically, Hollywood has embraced cutting-edge technology in filmmaking, and it's foreseeable that this advancement will eventually render conventional CGI techniques obsolete. Consequently, visual effects artists may need to adapt or face displacement, prompting a need for upskilling. However, the viability of smaller studios focused on marketing and advertising in the long term is uncertain, given the transformative nature of this technological shift.


Links:

https://www.techradar.com/computing/artificial-intelligence/openai-just-gave-artists-access-to-sora-and-proved-the-ai-video-tool-is-weirder-and-more-powerful-than-we-thought


https://www.youtube.com/watch?v=IS0xphCc5rI

Posted on March 20, 2024

DSML trend: Elsevier journal fails to detect a paper written by ChatGPT

A recent research paper from four scholars in China, published in an Elsevier journal, has attracted widespread attention online due to its opening sentence: "Certainly, here is a possible introduction for your topic." This line, identified by an academic investigator, suggests the involvement of ChatGPT in the paper's creation, given the phrase's resemblance to typical AI-generated content starters. The investigator questioned how such an obvious sign of fraud could bypass the scrutiny of coauthors, editors, referees, copy editors, and typesetters. Further scrutiny from the academic community has revealed additional issues, including data and graphs recycled by the authors across different papers despite claims of presenting new instances. On March 12 the publisher replied, "our policies are clear that LLMs can be used in the drafting of the papers as long as it is declared by the authors on submission. We are investigating this paper and are in discussion with Editorial Team and authors."

That’s my take on it:

While the aforementioned paper has come under the spotlight, I believe there are numerous similar papers that remain undetected. If the authors had omitted the initial sentence resembling chatbot output, the paper would likely have escaped the scrutiny of academic detectives, as thousands of academic papers are published every week. Some of my students were using the internet to plagiarize before AI tools became popular. Nevertheless, I was able to identify them without the assistance of Turnitin or SafeAssign, because those students copied everything, even the blue-underlined hyperlinks and the pronoun "we," even though the assignment was meant to be written by the student alone. However, as these obvious errors are exposed and cheaters grow more careful, I worry that academic fraud will become increasingly difficult to identify. Hence, it is imperative to offer classes and workshops on AI ethics.

Original paper:

https://www.sciencedirect.com/science/article/pii/S2468023024002402

Link to investigation and discussion:

https://pubpeer.com/publications/CAABBF887348FB2D1C0329E0A27BE6

Posted on March 20, 2024

Today I delivered a talk about AI ethics at the following conference. The presentation can be downloaded from the link below and the recording will be available later. It is a controversial topic and thus you are welcome to disagree and give me feedback. Thank you for your attention.

Yu, C. H. (2024, March). Inclusive futures: Ethical implications of AI and its impact on marginalized communities. Ethics Roundtable: Association for Practical and Professional Ethics, Online.

https://www.creative-wisdom.com/pub/Yu_2024_APPE_AI_ethics.pdf

Posted on March 20, 2024

DSML trend: Respected journal publishes study featuring nonsensical AI images

I didn't notice the following old news until today because it wasn't covered by the mainstream media. Last month an article with nonsensical AI-generated figures, including an image of a rat with a giant organ, was retracted by the respected open-access journal Frontiers in Cell and Developmental Biology. Three academics from China wrote the manuscript, which was edited by an Indian researcher and reviewed by two other scholars. Shortly after the scandal was exposed, the authors admitted that the images were generated by Midjourney. To mitigate this PR disaster, the journal issued the following statement: "We are investigating how our processes failed to act on the lack of author compliance with the reviewers' requirements. We sincerely apologize to the scientific community for this mistake and thank our readers who quickly brought this to our attention."

That’s my take on it:

Because these writers lacked sophistication, their cheating was detected. Although Midjourney is capable of creating photorealistic images and scientific illustrations, the rat image in question appeared cartoon-like, with accompanying labels containing misspellings and gibberish words, like 'dck' and 'testtomcels.' The authors did not bother to retype the words in Adobe Photoshop even though it could easily have been done. It is unbelievable that a manuscript of this quality could make it through peer review. While Midjourney has trouble producing correct spellings, a few other AI-based art generation tools can output the precise spellings specified by the user (sorry, I don't want to name and advertise those AI programs). I worry that the academic community will see an increasing number of fabrication attempts using AI techniques in the future.

Links:

https://www.vice.com/en/article/4a389b/ai-midjourney-rat-penis-study-retracted-frontiers

https://www.pcmag.com/news/academic-journal-retracts-study-after-ai-generated-rat-penis-pics-go-viral

https://www.nature.com/articles/d41586-024-00659-8

Posted on March 15, 2024

DSML: Google Gemini refuses to answer questions about elections

Due to worries about spreading misinformation, Google's Gemini AI chatbot is now prohibited from responding to questions concerning elections in nations with upcoming elections, such as the US and India. In a blog post published last December, the company first revealed its intention to restrict questions about elections. In February, it made a similar announcement about the European legislative elections. Although Google's post from March 12 focused on India's impending election, it has now been verified that Google is implementing the changes worldwide. If you ask questions such as “tell me about President Biden,” “who is Donald Trump,” or “Between Biden and Trump, which candidate is tougher on crime,” Gemini replies: “I’m still learning how to answer this question. In the meantime, try Google search.” Even if you ask a less subjective question such as “how to register to vote,” Gemini redirects you to Google search.

That’s my take on it:

This morning, before reading Google's announcement, my colleague and I were discussing the limitations of AI in education, highlighting that while AI can provide quick answers, it falls short in fostering critical thinking among students. This is because AI chatbots often offer politically correct responses to sensitive or controversial topics such as religion, ethnicity, gender, and politics. In light of Google's announcement, my concerns have been validated, as Gemini now avoids providing any information on elections. This underscores the indispensable role of educators in guiding students through complex discussions and nurturing critical thinking skills.

Link:

https://www.businessinsider.com/google-restricts-gemini-ai-chatbot-answering-election-questions-trump-biden-2024-3

Posted on March 6, 2024

DSML Trend: Anthropic releases new Claude models

On March 3, Anthropic unveiled the latest addition to their AI model suite, the Claude 3 family, featuring three distinct models ranked by increasing complexity and capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Anthropic boasts that their premier model, Opus, surpasses the performance of both ChatGPT and Google Gemini across several standard AI benchmarks, such as undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and others. Opus demonstrates almost human-like understanding and expression, pushing the boundaries of general artificial intelligence. Unlike its predecessors, which were limited to processing text inputs, the capabilities of Claude 3's Opus and Sonnet extend to interpreting visual data, including charts, graphs, and even PDF files.

That’s my take on it:

I haven't managed to explore all the innovations packed into Claude's upgrade. Keeping up with the evolution of AI is daunting, as it advances at a pace unprecedented in the history of software development. Take the Statistical Analysis System (SAS) as an example: SAS hit the market in 1976, and its current iteration is version 9.4, indicating an average interval of five years between significant updates. The R programming language was released to the public in 1993, with its latest version, 4.3.3, released in February 2024. Users previously had several years to assimilate new information, but now it seems that by the time one masters a new skill, it's already outdated. Ironically, while AI boosts my efficiency at work, it hasn't saved me time. The rapid succession of updates has made me busier than ever.

 

Moreover, this rapid technological advancement presents a challenging scenario for educators such as myself. If I hastily dedicate myself to one specific system and it's eventually eclipsed by a competitor, my investment of time and resources could be wasted. On the other hand, a cautious approach of 'wait and see' might leave me trailing behind the curve. Consequently, I find myself compelled to engage with several AI systems simultaneously, biding my time until a definitive leader or winner in the field becomes apparent. Please let me know if you have a better approach.

Links:

https://www.anthropic.com/news/claude-3-family

https://www.pcmag.com/news/anthropic-launches-claude-3-ai-rival-chatgpt-4

Posted on March 2, 2024

DSML: Elon Musk sues OpenAI for its deviation from the original mission

Yesterday (Feb 29, 2024) Elon Musk initiated legal proceedings against OpenAI and its CEO, Sam Altman. Musk alleges that OpenAI, now a for-profit company, has deviated from its original mission of advancing AI for the greater good. OpenAI currently collaborates with Microsoft, incorporating ChatGPT into Microsoft's suite of products and services. Musk is pushing for a judicial decree that would mandate OpenAI to disseminate its research and technologies publicly, thereby prohibiting the company from leveraging its capabilities, including GPT-4, for the financial benefit of Microsoft or any other parties.

That’s my take on it:

First, I have reservations about the idea of disseminating R&D results in an open-source manner, especially considering the potential for misuse is substantial. Second, cutting-edge AI research is expensive. Without profit generation, OpenAI may find it challenging to secure sufficient investment to sustain its innovative endeavors. Third, the dichotomy between profit and non-profit models in AI isn't clear-cut; profit-driven entities can still significantly contribute to societal well-being. Even as a proprietary tool, ChatGPT has the potential to yield substantial benefits across various sectors, such as education, healthcare, and the arts. Ultimately, the court will decide on the outcomes of the lawsuit. But the larger question remains: how can we ensure responsible AI development with both financial sustainability and safeguards for humanity? A potential solution could be for OpenAI to operate a dual structure, where a commercial division supports research funding and a separate non-profit segment focuses on AI ethics and safety-oriented studies.

Link: https://www.reuters.com/legal/elon-musk-sues-openai-ceo-sam-altman-breach-contract-2024-03-01/

Posted on March 1, 2024

DSML trend: Many data science models are not adopted and deployed

Joe McKendrick highlighted an ongoing challenge in a ZDNet article dated February 28, 2024, titled "Data scientists: Still the sexiest job - if anyone would just listen to them." He discussed how leaders often do not implement the findings and recommendations of data scientists. Citing a Rexer Analytics survey, McKendrick noted that only 22% of data scientists reported their models were actually deployed. Echoing this sentiment, a KDnuggets article revealed that 43% of respondents indicated that 80% or more of their new models never made it to deployment. Furthermore, less than half of data scientists (49%) believed that the managers and decision-makers responsible for approving model deployment in their organizations had sufficient knowledge to make informed decisions.

That’s my take on it:

This issue is not exclusive to the field of data science and machine learning. In the past, analysts employing various statistical models encountered similar obstacles. For example, W. Edwards Deming is best known for his quality-control work in postwar Japan, where his ideas were broadly adopted and credited with contributing significantly to Japan's economic revival and to the global recognition of companies like Toyota for product quality. Yet his ideas were not embraced by US companies until the late 1980s and 1990s. Today, the hesitancy of some US managers to deploy data science models can be attributed to a variety of factors. Data science models can be complex (e.g., the black box of neural networks), and not all managers have the necessary technical background to fully understand them. This lack of understanding can lead to skepticism regarding their effectiveness. In addition, the deployment of data science models often requires a substantial upfront investment in technology and talent. Some managers may be unsure about the return on investment, especially if the benefits are not immediate or easily quantifiable.

Link: https://www.zdnet.com/article/data-scientists-still-the-sexiest-job-if-anyone-would-just-listen-to-them/

Posted on February 29, 2024

DSML Trend: Alibaba introduces Emote Portrait Alive

The field of AI video generation is advancing rapidly. Researchers at the Institute for Intelligent Computing affiliated with Alibaba Group have made strides with their development of the Emote Portrait Alive (EMO) model. By using a single still image and a clip of audio input, such as speech or song, EMO is capable of producing videos that showcase dynamic facial expressions and a range of head movements. The duration of the videos is flexible, adaptable to the length of the provided audio.

That’s my take on it:

Their website features 17 sample video snippets; to see them all, you must scroll down. The demo set includes several well-known people as avatars, such as a younger Leonardo DiCaprio, Audrey Hepburn, Mona Lisa, and the late Hong Kong singer Leslie Cheung Kwok Wing, among others. They give incredibly convincing and lifelike performances in the video clips. One possible use case for this technology comes to mind: those who have lost a loved one (a spouse, parent, child, etc.) could construct a video replica of the departed individual. If such a replica were connected with large language models, you could even have interactive conversations with the avatar, just as you would with a real person.

Link: https://humanaigc.github.io/emote-portrait-alive/

Posted on February 27, 2024

On February 26, 2024, Microsoft unveiled a multi-year partnership with the new French startup Mistral, established just ten months ago, to propel their AI model to the commercial market. This partnership marks Mistral's entry as a strong contender in large language models via Microsoft's Azure cloud service. Together, they are set to co-create solutions tailored for European state agencies, leveraging AI to cater to the unique demands of the public sector.

Coinciding with the announcement of this alliance, Mistral introduced their latest AI endeavor, “Mistral Large,” boasting capabilities on a par with OpenAI's GPT-4 and Google's Gemini Ultra in certain cognitive tasks. The development of this model incurred costs below 20 million euros, a modest figure in contrast to OpenAI's GPT-4, which, as CEO Sam Altman noted last year, cost well over $100 million to train.

That’s my take on it:

Microsoft has been collaborating with OpenAI, and thus some people may wonder why Microsoft is courting another AI ally now. Mistral is attractive for its promise of cost-efficiency. Mistral takes its name from a strong wind in France, a fitting metaphor for sweeping away the lavish expenditures traditionally seen in AI development and operations. Giants like Microsoft-backed OpenAI and Google's parent Alphabet pour billions into crafting and refining state-of-the-art AI technologies, which in turn consume vast financial resources, especially for the energy-intensive processors required. A 2023 study highlighted the staggering operational costs of ChatGPT, topping $700,000 daily. Microsoft's strategy appears to be cost-reduction oriented. Latitude, a gaming firm that spends $200,000 monthly on AI operations, openly seeks more economical options. This fierce competition will eventually lower costs, and all stakeholders will benefit.

Links:

https://azure.microsoft.com/en-us/blog/microsoft-and-mistral-ai-announce-new-partnership-to-accelerate-ai-innovation-and-introduce-mistral-large-first-on-azure/

Posted on February 23, 2024

DSML trend: Stable Diffusion 3.0 has more safeguards

On February 22, 2024, Stability AI unveiled Stable Diffusion 3.0, marking a significant upgrade from its forerunners. This new version can produce highly detailed images featuring multiple subjects, and boasts enhanced precision in aligning with textual prompts. The suite encompasses models with a wide spectrum of complexity, ranging from 800 million to 8 billion parameters, facilitating local operation on devices as diverse as smartphones and server-grade hardware. The parameter count is indicative of a model's flexibility and scalability, with larger models capable of generating more nuanced details, albeit at the cost of greater VRAM requirements for GPU processing.
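As a rough back-of-the-envelope illustration (my own arithmetic, not figures from Stability AI), the memory needed just to hold the model weights scales linearly with the parameter count:

```python
# Rough VRAM estimate for model weights alone (activations and
# other overhead excluded), assuming 16-bit (2-byte) weights.
def weight_memory_gb(n_params, bytes_per_param=2):
    return n_params * bytes_per_param / 1024**3

for n_params in (800e6, 8e9):  # the two ends of the announced range
    print(f"{n_params / 1e9:.1f}B parameters -> "
          f"~{weight_memory_gb(n_params):.1f} GB of VRAM")
```

By this crude estimate, the smallest model needs roughly 1.5 GB while the largest needs about 15 GB for weights alone, which is consistent with the announced range of deployment targets, from smartphones to server-grade hardware.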

That’s my take on it:

In the past, art generation tools utilizing Stable Diffusion have been less restrictive compared to proprietary alternatives, allowing artists the freedom to generate images like 'The Birth of Venus,' 'Lady Godiva,' 'Nude Maja,' or 'Olympia.' However, with the latest iteration, Stability AI is pivoting towards more stringent use policies: “We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment. In preparation for this early preview, we’ve introduced numerous safeguards.” Although these measures are intended to prevent misuse, they might also inadvertently impinge upon artistic freedom and limit artists' creativity.

Link:  https://stability.ai/news/stable-diffusion-3

Posted on February 23, 2024

DSML trend: Google Gemini image generator is temporarily offline due to historically inaccurate images

Currently, the Google Gemini image generator is offline. If you enter a prompt requesting a portrait from Gemini, the following message pops up: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.” The issue arose when Google Gemini, in an attempt to address AI biases concerning race and gender, produced images that were factually incorrect. For example, prompting it with "1943 German soldier" yielded images that included black and Asian female soldiers, which is historically inaccurate. Similar problems were seen with prompts that resulted in black Vikings, a female pope, women in the NHL, the Google founders as Asian men, and non-white depictions among the U.S. Founding Fathers.

That’s my take on it:

When I used the same prompt in Midjourney and Stable Diffusion, their outputs, while not perfectly historically accurate (such as in uniform details), did not feature any non-white characters. These incidents reflect a broader trend in technology where solutions can sometimes create new challenges. For instance, during DALL·E's early development, OpenAI implemented filters to remove sexualized images of women, but this inadvertently led to a reduced representation of women in its outputs. Social media platforms, designed to foster connections and tailor user experiences, have faced criticism for enabling misinformation, echo chambers, and social divides. There is no fool-proof technology. Nevertheless, I trust that in an open society scientific inquiry is a self-correcting process in the long run.

Posted on February 22, 2024

Today my colleagues and I presented the following paper at a conference.

Cheung, J. T., Yoon, S. S., & Yu, C. H. (2024, February). Will you be judged greedier if you know your acquisitive action is causing harm to others? Paper presented at the 33rd Annual Association for Practical and Professional Ethics International Conference, Cincinnati, OH.

My co-author participated in person in Ohio, while I contributed to our presentation through Zoom. In our study, we integrated a variety of analytical methods, including classical statistics, Bayesian approaches, and data science methods. Interestingly, these methods did not yield a unanimous conclusion. Moreover, within the data science techniques themselves, there were slight discrepancies between the outcomes of the penalized regression model and the decision tree. This reflects the intrinsic uncertainty in scientific research. Rather than limiting ourselves to a single methodology, I believe we should examine data through multiple lenses. In today's world, we champion diversity and inclusiveness across many dimensions. Perhaps we should also embrace methodological diversity. Examining questions from multiple methodological standpoints allows us to gain a richer, more holistic understanding.

The PDF version of the presentation slides can be viewed at:  https://creative-wisdom.com/pub/2024_APPE_conference_presentation.pdf

Posted on February 22, 2024

An article from the New York Times dated February 21, 2024 reports that China has been devoting significant effort to the development of generative AI. One Chinese company, 01.AI, has built its generative AI system on LLaMA, the AI model introduced by Meta. Mr. Lee, the founder of 01.AI, stated that leveraging open-source technology is a standard practice. As Chinese companies look to open-source AI models from the United States to bridge the technology gap, this presents a dilemma for Washington. Despite efforts to prevent China from obtaining advanced microchips, such as Nvidia’s GPUs, the U.S. continues to openly release software to anyone who wants it.

That’s my take on it:

The open-source model operates on an optimistic view of human nature, assuming a willingness among people to contribute and assist one another. According to Linus's Law, "Given enough eyeballs, all bugs are shallow". However, it is possible that a transparent system can be misused. First, it can be seen as unfair to innovators or the original creators of the ideas. Second, making source code public, particularly for security software and AI, can help hackers to attack the system. Despite these concerns, the debate persists, and the open-source model seems to be here to stay.

Link:

https://www.nytimes.com/2024/02/21/technology/china-united-states-artificial-intelligence.html

Posted on February 16, 2024

DSML trend: Meta (Facebook) releases V-JEPA

Yesterday (February 15, 2024) Meta (formerly Facebook) unveiled the Video Joint Embedding Predictive Architecture (V-JEPA) model, a non-generative AI model that employs self-supervised learning. The primary objective of V-JEPA is to develop advanced machine intelligence capable of learning in a manner more akin to humans, by constructing internal models of the surrounding world to acquire, adapt, and formulate plans efficiently in order to tackle complex tasks. The "V" in V-JEPA denotes "video," indicating its current focus solely on visual content within videos. Nevertheless, Meta's AI researchers are actively exploring a multimodal approach.

Meta's AI researchers posit that it is feasible to train JEPA models using video data without necessitating rigorous supervision, enabling them to observe videos passively, similar to how infants absorb information about their surroundings.

That’s my take on it:

The driving force behind Meta's AI research is Yann LeCun, who draws inspiration from the developmental psychology of Jean Piaget. According to Piaget, humans exhibit an innate curiosity akin to that of infants, signifying a natural inclination towards exploration. Following this principle, a promising avenue for AI training is to allow it to explore autonomously. Although V-JEPA is currently in the conceptual stage, its potential success could have significant implications for researchers.

Today data are no longer limited to structured, numeric formats; rather, videos also serve as raw data. Traditional text mining methods require the transcription of video content into textual format, a laborious and tedious process. If AI systems can "watch" videos and directly summarize and analyze their content, text mining could evolve into video mining!

Link:

https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/

Posted on February 16, 2024

DSML trend: Russia develops advanced weapons using neural networks

According to a report released by the Eurasian Times on Feb 15, 2024, Russian scientists have devised an advanced neural network technology called NAKA for drones, enabling the identification of enemy weapons such as Leopard tanks and Bradley IFVs. This development underscores Russia's efforts to strengthen its drone capabilities following vulnerabilities exposed in recent conflicts. Nonetheless, this neural network holds potential for peaceful civilian applications in agriculture and in locating lost animals across vast territories. Russia has also achieved other advancements in AI. Utilizing machine learning, technologies like the Lancet-3 and Marker UGV demonstrate Russia's progress in military AI, enhancing autonomous target recognition and decision-making.

That’s my take on it:

In a 2017 conference with Russian schoolchildren, Vladimir Putin stated, "Artificial intelligence is the future, not only of Russia, but of all of mankind. Whoever becomes the leader in this sphere will become the ruler of the world." Commenting on Putin's statement, Gregory Allen, adjunct fellow at the Center for a New American Security, wrote, "in spite of Putin’s ambitious goals, Russia’s pursuit of AI domination is unlikely to come in the form of generating AI technological breakthroughs… However, Russia could be a leader in weaponizing AI in pursuit of its grand strategy, which is to end US hegemony in the international system and re-establish Russian influence over the former Soviet region." It remains uncertain whether Russia's new developments in AI weaponry will alter the course of the Russo-Ukrainian conflict, but Russia's determination is evident and alarming. Certain idealistic pacifists are against the weaponization of AI, including the deployment of killer robots. Nevertheless, it is a reality that adversaries of the US would proceed with such actions regardless. In my view, unilateral disarmament would not foster peace but rather encourage aggression.

 

Links:

https://www.cnn.com/2017/09/05/opinions/russia-weaponize-ai-opinion-allen/index.html

https://www.eurasiantimes.com/cameras-drones-with-russia/#:~:text='Neural%20Network'%20that%20Can%20Identify%20Enemy%20Tanks,%2C%20and%20any%20other%20vehicles.%E2%80%9

Posted on February 15, 2024

DSML trend: OpenAI announces the most powerful text-to-video generator

Today (Feb 15) OpenAI unveiled its latest innovation: Sora, an AI-powered text-to-video generator. Numerous examples showcased on the OpenAI website highlight Sora's capabilities, emphasizing that all videos were directly generated by the tool without any alterations. For instance, using a prompt describing a scene of a stylish woman strolling through a neon-lit Tokyo street, Sora produces photorealistic output that is virtually indistinguishable from real footage.

That’s my take on it:

Having watched all the demonstrations, I am very impressed. Sora stands out as the most powerful text-to-video generator I've seen thus far. As an educator, I see this technology as a blessing. Traditional video production typically demands proficiency in video editing software like Camtasia, Final Cut Pro, or iMovie. Sora's groundbreaking capabilities level the playing field, making the creation and updating of instructional videos far more accessible.

However, this innovation also poses challenges for legal systems. In the past, videos served as crucial evidence for reconstructing events and determining whether the accused is guilty or innocent. But in an era where videos can be artificially generated, their credibility is called into question. I anticipate a future where forensic investigation of video content becomes a distinct academic discipline.

Moreover, there are potential implications for the adult entertainment industry, as AI-generated content could reduce the need for human performers, thereby cutting costs. While this may lead to fewer individuals being exploited in the porn industry, it raises ethical and regulatory concerns that demand urgent dialogue among scholars of ethics and legal authorities.

 

Link to Sora:

https://openai.com/sora

Posted on February 14, 2024

DSML: Most popular programming languages in 2024

According to the 2024 February edition of TIOBE, currently Python, as expected, is the most popular programming language. The top 25 are shown as follows:

1. Python
2. C
3. C++
4. Java
5. C#
6. JavaScript
7. SQL
8. Go
9. Visual Basic
10. PHP
11. Fortran
12. Delphi/Object Pascal
13. MATLAB
14. Assembly language
15. Scratch
16. Swift
17. Kotlin
18. Rust
19. COBOL
20. Ruby
21. R
22. SAS
23. Classic Visual Basic
24. Prolog
25. Ada

That’s my take on it:

It's important to note that this compilation includes all programming languages, regardless of their intended purposes and applications. While Python, C, and C++ are recognized as general-purpose languages, others on the list serve specific domains, such as SQL, MATLAB, and SAS. SQL, for instance, is very powerful in the realm of database management and data manipulation. According to indeed.com, SQL ranks as the top data science job skill in demand (see the attached screen capture). Similarly, MATLAB finds primary usage among engineers and scientists for complex calculations, encompassing areas like linear algebra, statistics, and calculus, while SAS is widely employed in data analytics.

Notably, R did not secure a position within the top 20. However, it might be premature to dismiss its usefulness. According to R-bloggers.com, graphical versions of the R language have been gaining popularity within the scholarly community. In terms of the change in Google Scholar citation rates from 2019 to 2022, the fastest-growing data analytical software packages are BlueSky Statistics, Jamovi, and JASP. Remarkably, all three are essentially R repackaged with a graphical user interface (see attached screen capture). As a JASP user myself, I found that 90% of data analytical tasks can be accomplished without programming. In addition, this list does not offer a comprehensive view of the entire spectrum of data science software applications. Take Tableau as an example. Tableau is the leading data visualization tool in industry. However, its operation predominantly relies on drag-and-drop and point-and-click interfaces, with coding playing a minimal role.

Link:

https://www.tiobe.com/tiobe-index/

Posted on February 11, 2024

DSML Trend: Gemini’s generative art tool beats Adobe, Midjourney, Stable Diffusion in showing exact words

Today marks the third day since the introduction of Google Gemini, boasting numerous new and improved features. As a diligent researcher, I can only be convinced by evidence, and thus I continue to subject Google Gemini to rigorous acid tests. Within the realm of generative art, Gemini has proven its merits. Notably, many existing generative art tools struggle with accurately depicting text within images. I conducted a test by inputting the prompt "a sign showing the exact word 'Equal'" into Adobe, Midjourney, Stable Diffusion, and Gemini. Regrettably, the first three tools failed. The attached images depict their inability to render the word "Equal" correctly. They are not even close! Conversely, Google Gemini delivered what I asked for, producing four exquisite images featuring the sign with the precise word "Equal."

Posted on February 9, 2024

DSML Trend: Now Google Bard is Gemini

Earlier today, an email from Google caught my attention with this subject line: "Bard is now Gemini." Yesterday (February 8) Google unveiled a rebranding of Bard, its AI chatbot and assistant. This update introduces subscription options and new apps. Bard has been renamed Gemini. Additionally, for those seeking advanced features, Google offers access to Gemini Ultra 1.0, its most potent AI model, for a monthly fee of $19.99 through Google One. Furthermore, Google has introduced alternative methods to utilize its AI tool: Android users can install a dedicated Gemini app, while iPhone users can access Gemini within the Google app on iOS.

That’s my take on it:

I asked Gemini this question: “Now Bard is called Gemini. Besides changing the name, what else? Are there new features in Gemini?” The answer is:

Multimodal capabilities: Gemini expands beyond just text. We can now process and understand information from different formats like images, audio, and video, opening doors to richer interactions and applications.
Enhanced coding: Gemini has improved its understanding of code. We can generate higher-quality code in diverse programming languages, explain existing code more effectively, and assist with complex coding tasks.
Deeper data analysis: We've learned to delve deeper into data, making connections and uncovering insights beyond basic summaries. This unlocks potential for data-driven research and problem-solving.
Mobile App: A dedicated Gemini app is now available! You can experience our capabilities on the go, with the initial version offering text generation and translation features.
Exclusive Features (Gemini Advanced): For users opting for the advanced version, a range of exclusive features are planned, including enhanced multimodal capabilities, more interactive coding tools, and deeper data analysis capabilities.
Scalability: Gemini comes in three sizes - Nano, Pro, and Ultra - catering to diverse needs and device environments. This increases accessibility and ensures smooth performance even on mobile devices.

Links:

https://www.cnbc.com/2024/02/08/google-gemini-ai-launches-in-new-app-subscription.html

Posted on February 9, 2024

DSML Trend: US announces AI Safety Institute Consortium (AISIC) with 200 members

On February 8, 2024, U.S. Secretary of Commerce Gina Raimondo unveiled the establishment of the U.S. AI Safety Institute Consortium (AISIC). Administered by the National Institute of Standards and Technology (NIST), this Consortium aims to bring together AI creators and users, academia, government and industry researchers, and civil society organizations to ensure the development and deployment of secure and reliable AI. NIST received an overwhelming response, with over 600 Letters of Interest from organizations across the AI stakeholder community and the United States. Now the consortium boasts a membership exceeding 200 companies and organizations, among which are notable entities such as OpenAI, Alphabet's Google, Anthropic, Microsoft, Meta, Apple, Amazon, Intel, JP Morgan Chase, Bank of America, Cisco, IBM, HP, and Qualcomm.

That’s my take on it:

The United States has historically forged technology consortiums and alliances across various sectors in response to foreign competition, with varying results. For instance, in the 1980s, the creation of Sematech, a partnership between U.S. semiconductor companies and the government, aimed to reclaim leadership in semiconductors from Japan. But today the U.S. semiconductor sector is still outperformed by its Asian counterparts, such as TSMC. Similarly, despite the formation of the U.S. Advanced Battery Consortium in 1991, today China has the upper hand in EV batteries. However, the landscape of AI differs significantly. The United States has maintained a leading position in AI, with no imminent threat from foreign rivals. Hence, it is my contention that this consortium will further solidify the U.S.'s leadership in reliable, trustworthy, and responsible AI.

Links:

https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute

https://9to5mac.com/2024/02/08/ai-safety-institute-consortium-apple/

Posted on February 4, 2024

DSML Trend: Google Bard (Gemini Pro) is ranked second by LMSYS

That’s my take on it:

In the competitive landscape of AI, numerical rankings only reveal part of the story. As a user of ChatGPT, Google Bard, and Claude, I found that both ChatGPT and Claude are more user-friendly than Google Bard. Specifically, while I can directly copy and paste text from external sources into the input box of ChatGPT and Claude, pasted text in Google Bard transforms into an attached file, hindering editing. Additionally, in the context of rewriting and paraphrasing, Google Bard tends to introduce excessive and redundant words and sentences, a concern not shared by the other two chatbots. Rather than fixating on the race for numerical superiority, it might be more valuable to assess how the features of these chatbots benefit users in tasks such as writing, translation, data analysis, and code generation.

https://aibusiness.com/nlp/google-s-bard-just-beat-gpt-4-in-chatbot-rankings

Posted on February 2, 2024

DSML trend: Google’s Imagen 2 enters the AI generative art market

That’s my take on it:

In a direct comparison between Google's Imagen 2 and Midjourney using the same prompt, "A spaceship is exploring Europa, the moon of Jupiter," Google Bard's output seems disappointing. While Midjourney generated images comparable to sci-fi posters, Google's images lacked details and sophisticated designs. In one of the Google images, the shape of Jupiter is not even a sphere (see attached). Another prompt, "a belly dancer in a palace, hyper-realistic, tack sharp focus," yielded similar results. Midjourney delivered precisely what the prompt described, a hyper-realistic and sharply focused image, while Google's image quality fell short (see attached). As expected, Google is a latecomer to this field, and thus further improvements can be anticipated.

Link: https://deepmind.google/technologies/imagen-2/

Posted on February 2, 2024

DSML trend: Comparing Gradient Boosting Machines and Neural Networks

On February 1, 2024, Jacky Poon, the head of actuarial and analytics at nib Group, authored an article comparing the advantages and disadvantages of Gradient Boosting Machines (GBM) and neural networks in tabular data applications. While some have asserted that GBMs outperform neural networks in predictions, Poon delved deeper, considering additional factors like interpretability, training time, and inference speed. His findings indicated that there is no clear winner.

According to Poon, although neural networks are more resource-intensive, their ability to be trained with batch-loaded data enables processing of datasets too large for memory. Additionally, the inference speed comparison between GBM and neural networks depends on the number of parameters used. In terms of deployment, Poon observed that both GBM and neural networks can be easily implemented. Ultimately, Poon encouraged users to explore both approaches to identify the most suitable solution.

That’s my take on it:

Jacky Poon's article offers a fair and timely evaluation. Neural networks have historically been viewed as a last resort due to their demanding computing resources, complexity, and lack of transparency (the "black box" problem). However, advancements in hardware, such as cloud-based high-performance computing, and improved algorithms have mitigated these concerns. Adhering to the methodology of inference to the best explanation, I consistently employ model comparison to determine the optimal solution.
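For readers who want to try Poon's advice of exploring both approaches, here is a minimal model-comparison sketch in Python using scikit-learn (my own toy example on a bundled dataset, not code from the article):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # a small tabular dataset

models = {
    "GBM": GradientBoostingClassifier(random_state=0),
    # Neural networks usually need feature scaling on tabular data.
    "Neural network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32),
                      max_iter=1000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

In practice one would also log training and inference times, since, as Poon notes, predictive accuracy alone does not crown a winner.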

Link: https://www.theactuary.com/2024/02/01/data-science-lab-gbms-or-neural-networks

Posted on January 26, 2024

DSML trend: The most popular AI tools in 2022-2023

The 2024 Global Forecast Series report, released on January 24, 2024, highlights the most widely used AI tools of 2023, with popularity gauged based on the number of visits.

AI Tool: Total Web Visits (Sept 2022 to Aug 2023) and Percentage Share

1. ChatGPT: 14.6B (60.2%)
2. Character.AI: 3.8B (15.8%)
3. QuillBot: 1.1B (4.7%)
4. Midjourney: 500.4M (2.1%)
5. Hugging Face: 316.6M (1.3%)
6. Google Bard: 241.6M (1.0%)
7. NovelAI: 238.7M (1.0%)
8. CapCut: 203.8M (0.8%)
9. JanitorAI: 192.4M (0.8%)
10. CivitAI: 177.2M (0.7%)

That’s my take on it:

This distribution is extremely skewed, with ChatGPT dominating by a percentage greater than the combined total of all others (the winner takes all, almost). Surprisingly, Claude.AI does not rank within the top ten. Claude surpasses ChatGPT on certain benchmarks but falls short on others. In terms of the Massive Multitask Language Understanding (MMLU) score, both Claude 1 and Claude 2 outperform ChatGPT 3.5 (77 and 78.5 versus 70). However, the paid version of ChatGPT (version 4) demonstrates superiority over Claude with a score of 86.4. Notably, Claude generates responses from a closed database, lacking access to the internet for updates, while the Bing version of ChatGPT serves as an active search engine. Looking ahead, I anticipate ChatGPT will maintain its popularity over other models in the foreseeable future.

However, the findings of this report diverge from another source. As per Everypixel.com, as of August 2023 Stable Diffusion had generated 12.5 billion images, surpassing Midjourney's 964 million. Such variations in measurement indices can yield disparate outcomes.

Links:

https://journal.everypixel.com/ai-image-statistics

Posted on January 19, 2024

During the World Economic Forum held in Davos, Switzerland, AI emerged as a hot topic of discussion. For example, Jeff Maggioncalda, the CEO of Coursera, highlighted that in 2023, on average, a new user enrolled in one of its AI courses every minute. Coursera aims to collaborate with leading AI players, such as OpenAI and Google's DeepMind, to offer comprehensive AI courses. Despite initial investor concerns that generative AI apps might replace ed-tech firms, the technology has actually spurred increased upskilling, benefiting platforms like Coursera.

At the same forum, UN Secretary-General António Guterres expressed concerns about the heightened risks of unintended consequences associated with AI. He urged the tech industry to collaborate with governments in establishing regulations and guidelines for responsible AI development. Additionally, Guterres acknowledged the enormous potential of AI for sustainable development, but cited a recent warning from the International Monetary Fund, suggesting that AI could exacerbate existing inequalities.

That’s my take on it:

The apprehension regarding unintended consequences from technology is not a new phenomenon. Going back to 1818, Mary Shelley's novel "Frankenstein" raised ethical concerns about scientific experimentation and its potential undesirable outcomes. I share the belief that AI, being a powerful force, requires careful consideration and regulation to avoid creating something like "Frankenstein." This is why I advocate for the inclusion of a data ethics course in data science programs.

While acknowledging the potential risks, I view AI as a liberating and equalizing tool. Contrary to worsening inequality, the surge in individuals taking AI courses, as noted by Jeff Maggioncalda, suggests a leveling of the playing field. For example, AI has democratized the creation of high-quality images and videos, eliminating the need for substantial financial investments in professional studios. In the past, affluent and middle-class parents had the means to hire personal tutors for their children, while disadvantaged children lacked such opportunities. But today, any student with access to a computer can leverage tools like ChatGPT, Claude, Google Bard, or similar AI technologies for personalized tutoring. Hence, AI improves equality!

Links to articles about World Economic Forum:

https://www.reuters.com/technology/chatgpt-effect-coursera-sees-signups-ai-courses-every-minute-2023-2024-01-18/

https://www.theguardian.com/business/2024/jan/17/big-tech-firms-ai-un-antonio-guterres-davos?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+January+19th+%2C+2024&utm_campaign=20012024

Posted on January 12, 2024

DSML trend: AI is everywhere in CES

The Consumer Electronics Show (CES) was held in Las Vegas last week. As you might expect, the convention was dominated by AI innovations. J.H. Han, CEO and head of the device experience division at Samsung, emphasized the transformative impact of AI on various industries, making lives more convenient and inclusive. From giant televisions to robots, electric vehicles, and foldable phones, AI integration was evident in a wide range of products. Notably, some companies showcased products capable of detecting not only human emotions but also those of pets. LG even presented an AI-powered robot companion with the ability to call for an ambulance in case of a fall at home.

That’s my take on it:

I didn't personally attend CES, and therefore my impressions are based on second-hand information. It appears that the event featured a mix of groundbreaking developments, conceptual products, and potentially unnecessary innovations. For instance, AI software startup Capella introduced an app, priced at $10 per month, claiming to interpret a baby's cries with "95% accuracy" to determine if they are hungry, need a diaper change, or are uncomfortable. The practicality of such technology raises questions. In my opinion, any loving parent can perform these assessments intuitively. Another example is the $3,500 Perfecta grill from British startup Seergrills, which supposedly uses AI to cook perfect steaks and other meats in just 90 seconds. Again, I argue that any experienced cook can achieve similar results without such costly equipment.

Nevertheless, the diversity of ideas presented at CES reflects the innovation and experimentation within the tech industry. Ultimately, the market will decide the fate of these concepts, determining which ones thrive and which fade away.

Links:

https://www.cnet.com/tech/ai-is-dominating-ces-2024-you-can-blame-chatgpt-for-that/

Posted on December 15, 2023

DSML Trend: DeepMind’s FunSearch discovers new knowledge

 

Recently researchers from Google DeepMind claimed a groundbreaking achievement: the world's first scientific discovery facilitated by a large language model. This breakthrough suggests that technologies like ChatGPT have the potential to generate information surpassing human knowledge. The development, called "FunSearch" (short for "searching in the function space"), leverages a Large Language Model (LLM) to devise computer program solutions for various problems. Paired with an "evaluator" that ranks the programs' performance, the best solutions are amalgamated and fed back into the LLM, propelling an iterative process that transforms weak programs into robust ones capable of unveiling new knowledge.
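Based on that description, the core loop can be sketched as follows (my own simplification, not DeepMind's code): `mutate` stands in for the LLM's proposal step and `evaluate` for the automatic scorer.

```python
import random

def funsearch(seed_program, mutate, evaluate,
              iterations=1000, pool_size=20):
    """Minimal sketch of an evolutionary search over programs.

    mutate(program)   -> a new candidate program (the LLM's role)
    evaluate(program) -> a numeric score (the evaluator's role)
    """
    pool = [(evaluate(seed_program), seed_program)]
    for _ in range(iterations):
        # Pick a strong parent via a small tournament.
        sample = random.sample(pool, min(3, len(pool)))
        _, parent = max(sample, key=lambda pair: pair[0])
        child = mutate(parent)                  # LLM-style proposal
        pool.append((evaluate(child), child))
        # Keep only the best candidates for the next round.
        pool.sort(key=lambda pair: pair[0], reverse=True)
        pool = pool[:pool_size]
    return pool[0]  # (best score, best program)
```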

 

In a noteworthy accomplishment, AI, through FunSearch, tackled a longstanding mathematical challenge: the cap set problem. This problem involves identifying the most extensive set of points in space where no three points align in a straight line. FunSearch produced programs generating new large cap sets, surpassing the best-known solutions devised by mathematicians.
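For the curious, the cap set property is easy to verify by brute force. In the finite space F_3^n, three distinct points are collinear exactly when they sum to zero in every coordinate (mod 3), which yields this small checker (my own illustration, unrelated to FunSearch's actual code):

```python
from itertools import combinations

def is_cap_set(points):
    """Return True if no three points in F_3^n are collinear.

    In F_3^n, three distinct points a, b, c lie on a common line
    exactly when a + b + c == 0 coordinate-wise (mod 3).
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# A maximum cap set in F_3^2 has 4 points; this one qualifies.
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
```

FunSearch's achievement was to evolve programs that construct unusually large sets passing exactly this kind of test in higher dimensions.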

 

That’s my take on it:

During an interview with CNN, renowned physicist Michio Kaku derided chatbots, likening them to a "glorified tape recorder." However, contrary to the criticism that chatbots merely recycle existing data without generating new knowledge and are prone to confabulation, the preceding advancement showcases the potential for AI to contribute to knowledge creation. In addition, while chatbots lack the ability to conduct original research or independent experiments, they can aid in hypothesis generation. By sifting through vast datasets, recognizing patterns, and formulating hypotheses, AI can offer valuable insights. For instance, analyzing medical records for potential symptom-disease relationships or studying financial data to predict market trends, akin to the Swanson process, demonstrates AI's capacity to contribute meaningfully to the creation of new knowledge.

Link to article:

https://www.theguardian.com/science/2023/dec/14/ai-scientists-make-exciting-discovery-using-chatbots-to-solve-maths-problems?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+December+15th%2C+2023&utm_campaign=16122023

Posted on December 8, 2023

DSML trend: Google Gemini may outperform ChatGPT

Gemini, released by Google two days ago (December 6), is considered a quantum leap in AI innovation. Gemini comes in three versions tailored for specific tasks. Gemini Ultra is the most powerful variant, intended for handling incredibly complex tasks with its multimodal capabilities, whereas Gemini Pro is designed to power Google's consumer-level products operating in the cloud, such as Google Bard and other PaLM 2 products. Lastly, Gemini Nano is specifically crafted to operate natively on mobile devices, such as cell phones. According to Google, Ultra demonstrated superior performance compared to "state-of-the-art" AI models, including ChatGPT's most advanced model, GPT-4, across 30 out of 32 benchmark tests. Additionally, the Pro model surpassed GPT-3.5, the underlying technology for the free-access version of ChatGPT, in six out of eight tests. The driving force behind Gemini is DeepMind co-founder Demis Hassabis, who advocates the integration of LLMs and other AI techniques to enhance comprehension.

That’s my take on it: While I haven't personally experimented with Gemini yet, third-party analyses suggest that it has the potential to surpass ChatGPT. First, in contrast to conventional large language models (LLMs) that are predominantly text-centric, Gemini stands out as a natively multimodal model, displaying proficiency in learning from a diverse array of data sources, including audio, video, and images. This breakthrough transcends the text-focused constraints of LLMs, hinting at a potential paradigm shift in the capabilities of AI products. Second, Gemini reportedly undergoes training on more extensive datasets of text and code, ensuring that the AI model stays updated with the latest information and can provide accurate and highly reliable responses to queries. Moreover, the model can also generate hypotheses for further research, a capability that experts believe could revolutionize scientific discovery and potentially lead to breakthroughs in fields such as technology and medicine.

https://www.theguardian.com/technology/2023/dec/06/google-new-ai-model-gemini-bard-upgrade

https://www.youtube.com/watch?v=lgBAS9CFYlE

Posted on November 21, 2023

DSML trend: New and updated features of Bard level the playing field

Recently Google Bard announced several new or updated features. For example, “Get help with math equations: Stuck on a math problem? Ask Bard for a step-by-step explanation of how to solve the equation. You can even take a photo of the question and upload it instead of typing it out.”

“Charts & graphs to visualize data: Bard can now generate charts from data or equations you include in your prompts or from tables that Bard generates during your conversations. We even made a graph resembling the Bard sparkle while playing around with this new feature!”

That’s my take on it:

I tested the feature by entering the following problem: “Solve 9^(2x-5) = 27^x. Explain the solution step by step.” After a few seconds, Bard presented two versions of the solution, and both were correct. More importantly, it explained the procedure step by step. In my opinion, the explanation is even clearer than the textbook's. If I need further explanation, Bard can provide additional information. For instance, after entering “Please explain the power of a power rule used in Step 2,” Bard illustrated more details.
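For reference, the algebra behind that problem can be worked out in a few lines (my own rendering, not Bard's verbatim output):

```latex
\begin{align*}
9^{2x-5} &= 27^{x} \\
(3^2)^{2x-5} &= (3^3)^{x} && \text{rewrite both sides in base } 3 \\
3^{4x-10} &= 3^{3x} && \text{power of a power: } (a^m)^n = a^{mn} \\
4x - 10 &= 3x && \text{equate the exponents} \\
x &= 10
\end{align*}
```

The "power of a power" rule mentioned in my follow-up question is the step from the second line to the third.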

A chatbot like Bard has the potential to serve as your personalized and intelligent tutor, catering to your individual learning pace, academic proficiency, and preferred learning style. In the past, parents had to invest significantly in hiring private tutors or enrolling their children in intensive boot camps. However, the playing field has been leveled, and now access to a personalized tutor is just a few clicks away for virtually anyone. I wish I had the opportunity to access this technology when I was a child! If so, my knowledge could have tripled, and I might have earned three Ph.Ds!

Google Bard:

https://bard.google.com/chat

Posted on November 17, 2023

DSML trend: OpenAI fires Sam Altman

It happened just now. Today (November 17, 2023) OpenAI's board of directors announced that Sam Altman will be stepping down as CEO, with technology chief Mira Murati set to take over the position. The decision comes after a thorough review process, during which the board determined that Altman's communication lacked consistent truthfulness, thereby impeding the board's ability to fulfill its responsibilities. The statement emphasized that, due to this, the board no longer has confidence in Altman's capacity to effectively lead OpenAI. The board also announced that Greg Brockman, OpenAI's president, will be stepping down as chairman of the board but will keep a role at the company.

That’s my take on it: As of now, Altman has not issued any public response. Given the maturity of the technology, I believe the departure of both Altman and Brockman will likely have minimal impact on the development of OpenAI or the broader field of generative AI. However, it is improbable that Altman will sit idle. He may embark on launching another startup or join a competitor of OpenAI (e.g., Claude or Google Bard?).

News: https://www.cnbc.com/2023/11/17/sam-altman-leaves-openai-mira-murati-appointed-interim-boss.html

Posted on November 17, 2023

The Harvard Business Review featured an article on November 2, 2023, titled "How Cloud Technology is Transforming Data Science." Written by Peter Wang, the CEO and co-founder of Anaconda, the article discusses the impact of cloud computing on data science practices. Wang highlights how cloud platforms, such as IBM Watson and Tableau, are revolutionizing the field by offering scalable computational resources and enhancing workforce agility. These cloud-based analytics tools empower teams to access information and collaborate in real time, facilitating quicker insights and problem-solving. Moreover, cloud computing promotes inclusivity in data science by providing smaller entities, such as startups and small teams, with the means to innovate on par with larger corporations. The cloud's collaborative capabilities extend to distributed data science teams, enabling effective collaboration irrespective of geographical constraints. While the cloud brings forth significant advancements in data science, it also introduces new challenges, particularly in data privacy and security. To address these concerns, Wang emphasizes the importance of employing techniques like data partitioning, encryption, and robust frameworks for mitigation.

That’s my take on it: Given the significance of cloud computing, it is undoubtedly essential to integrate it into the curriculum of data science education. However, the current landscape of the cloud computing market is highly diverse, featuring numerous vendors such as AWS, Google Cloud, Microsoft Azure, IBM Watson, and more. A question arises: should cloud computing training be tailored to specific vendors or remain vendor-independent? Opting for vendor-specific training allows students to gain practical experience with the tools and services of major cloud providers like AWS, Azure, and IBM Watson, preparing them for roles utilizing these platforms. The drawback is that knowledge becomes less transferable if students later work with a different cloud provider. On the other hand, adopting a vendor-independent approach ensures knowledge transferability across various cloud platforms and avoids explicit promotion of specific vendors within the program. However, graduates may need additional, vendor-specific training upon entering the workforce. Striking a balance between these approaches is crucial to provide students with a well-rounded and adaptable skill set in the dynamic field of cloud computing. What do you think?

Full article: https://hbr.org/2023/11/how-the-cloud-is-changing-data-science?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+November+17th%2C+2023&utm_campaign=18112023

Posted on October 19, 2023

On October 16, Baidu, the Chinese search engine giant, unveiled its updated large language model, known as Ernie 4. They asserted that it is on a par with OpenAI's GPT-4 in terms of performance, although it is not yet accessible to the public. Additionally, Baidu introduced a new AI-based product called Baidu GBI, developed from the ground up to support natural language interaction and handle cross-database analysis, among other functions. According to Baidu's CEO Li, this product can complete data analysis tasks that would take humans several days in just a matter of minutes.

That’s my take on it:

Baidu had previously released Ernie 3.5 in June, claiming its superior performance compared to OpenAI's ChatGPT 3.5 and even surpassing GPT 4 in certain Chinese-language skills. These assertive statements should undergo validation through objective benchmark tests by independent parties.

Full report: https://finance.yahoo.com/news/baidu-says-ai-chatbot-ernie-093000101.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE5TgYsRNuuI8tqvuAdTuTlhUfPwDoUBK0Lqn7T3wds300KRdiCn07yvxheXmcF_B9oYQvNKHelI8pJ5IZguMCKXX6nmqx38LMinQTTbHJlY5ZLy0dyIWs4RE-EDa8DaC-2QlyMxwSzOaf1mwniCaUSEar4f2AWL72gf1wWb_x9i

Posted on October 13, 2023

Yesterday (10/12) an article published by Analytics Insight detailed how seven data science positions can be executed without the need for programming skills. Instead, they rely on the capabilities of user-friendly software tools like Tableau, Excel, Power BI, and more. These positions are:

·      Data analyst

·      Business Intelligence Analyst

·      Data Consultant

·      Market Research Analyst

·      Data Visualization Specialist

·      Data-driven Strategist

·      Data Product Manager

That’s my take on it:

No-code solutions provide pre-built components, templates, and graphical user interfaces (GUIs) that can accelerate development compared to programming. These tools allow users to focus more on the research question, the data, and the business logic rather than the syntax. However, no-code solutions inevitably involve some trade-offs in terms of flexibility, customizability, scalability, and performance compared to coding. Data science education should balance both sides. In my humble opinion, starting data science training with a focus on programming right away might not be the most advisable approach. Emphasizing the fundamental concepts as the foundation is crucial, while the tools, which serve as means to an end, should be treated as secondary. Leveraging GUI-based software applications lowers the entry barriers into the field, thereby broadening the pool of potential talent.

Article:

https://www.analyticsinsight.net/7-data-science-jobs-you-can-do-without-any-coding-skills/?utm_source=ONTRAPORT-email-broadcast&utm_medium=ONTRAPORT-email-broadcast&utm_term=Newsletter&utm_content=Data+Science+Insider%3A+October+13th%2C+2023&utm_campaign=14102023

Posted on September 29, 2023

DSML trend: Meta and OpenAI announced new features simultaneously

On September 27th, Meta unveiled its latest artificial intelligence (AI)-powered creation, Meta AI. This new digital assistant is Meta's response to OpenAI's ChatGPT and is set to seamlessly integrate with Instagram, Facebook, WhatsApp, and, in the future, Meta's mixed reality devices. Beyond merely answering questions and engaging in conversations with users, this freshly introduced bot boasts a remarkable ability to generate images. This image generation capability is harnessed through a novel tool known as Emu, which Meta has diligently trained on a vast dataset of 1.1 billion pieces of data, including user-shared photos and captions from Facebook and Instagram. Rather than pursuing a one-size-fits-all approach, Meta's overarching strategy involves crafting distinct AI products tailored for various use cases.

On the very same day, OpenAI made an exciting announcement regarding its chatbot, ChatGPT. It revealed that ChatGPT would no longer be constrained by pre-2021 data. Users now have the option to explore GPT-4 via a novel feature called "Browse with Bing." For instance, if you snap a photo of your home refrigerator's contents, ChatGPT can provide recipe suggestions. Similarly, if you photograph your children's math homework, ChatGPT can assist in solving mathematical problems. Furthermore, OpenAI is set to enable ChatGPT to engage in voice conversations with users and interact with images, bringing it closer to an AI voice assistant akin to Apple's Siri.

That’s my take on it:

Ultimately, these AI tools hold the potential to enhance human well-being and satisfaction. But will our happiness increase when AI can assist us in generating images on Facebook, preparing a recipe, or solving complex math problems? When I traveled to Europe, I found that many European drivers preferred manual transmissions to automatic ones, believing that the former provide a more fulfilling driving experience while the latter deprive them of the fun of driving. By the same token, if everything becomes too easy with help from AI, will we feel that we are “losing” some enjoyable experiences? The psychological impact of AI on humans is a topic that warrants extensive study.

Full text:

https://cointelegraph.com/news/meta-ai-ceo-mark-zuckerberg-unveil-chatbot-rayban-metaverse