AWS announces a new cloud service & Google opens up PaLM-E. AI/ML Digest #0

Hi everyone!

Today, I am pleased to present to you the first English-language issue of our digest on Big Data, Artificial Intelligence, and Machine Learning news and trends. But before we get into the details, a quick introduction: I am Vova Kyrychenko, CTO at @Xenoss.

You may reasonably ask: why am I doing this digest, who is it for, and why is it worth reading? :) Let me answer these questions briefly.

My daily work as a CTO revolves around developing and improving the technical solutions our company offers. Since we focus on the dynamic and technically demanding MarTech/AdTech domain, many of these solutions draw heavily on ML/AI: natural language processing, predictive and augmented analytics, computer vision, and deep neural networks. This is what sparked my interest in the field and motivates me to track its development closely.

Who will be interested in this digest? When putting it together, I aim at a wide range of readers and try to balance the materials so that everyone, including those in non-technical roles, can find something of interest. So you will find links to hardcore research with detailed code examples and analyses of language models, as well as review articles and updates that will help you keep up with the latest industry events.

Well, now I suggest you take a closer look at the topics I have prepared for you in this issue:

  • AWS announces a new cloud service and makes its CodeWhisperer generally available 
  • Google opens up PaLM-E, an LLM for robots, to address a lack of large-scale datasets
  • Nvidia’s CEO shares the company’s story and the results of its entry into the AI market
  • The AI expert Casey Greene talks about healthcare applications of artificial intelligence

I hope you will find this digest interesting. By the way, one reminder: liking, commenting, and sharing it will speed up the release of the next issue ;)

Thank you, and enjoy reading!

Articles about AI/ML

Stability AI, the company behind the Stable Diffusion image generator, has released StableLM, a suite of open-source LLMs that developers are free to use and adapt. StableLM is meant to generate text and code and is trained on a larger version of the Pile, an open-source dataset that draws on Wikipedia, Stack Exchange, PubMed, and more.

Elon Musk plans to deploy TruthGPT, an AI-based product that could rival offerings from Microsoft and Google. He is also launching a new company, X.AI, which he founded in March. While information about the project is scarce, Musk admits he is amassing powerful computer hardware to work on generative artificial intelligence, the technology underlying chatbots like ChatGPT.

After criticism of its “closed-door policy,” OpenAI is making a change: its Consistency Models technology is now available as open source. The paper introducing this new class of generative models was published in March, and it may signal the next stage in the AI artistry race. With this move, OpenAI’s DALL-E could stand out from its competitors.

Now a few generative AI updates from Amazon Web Services (AWS). First is the new Amazon Bedrock cloud service, whose initial set of supported models includes those from AI21, Anthropic, and Stability AI. There is also a fresh family of models known as Amazon Titan. Finally, news about Amazon CodeWhisperer: it has become generally available and is free for individual developers.

Synthesis AI, specializing in synthetic data technologies, has found a new technique to generate three-dimensional digital humans from text prompts. Synthesis AI uses generative AI and visual effects pipelines to convert text to 3D, making high-resolution, cinematographic-quality digital humans that can be applied in areas such as gaming, virtual reality, film, and simulation.

Atlassian will use OpenAI technology to add artificial intelligence features to a variety of collaborative software. In particular, the GPT-4 model will enable Jira to process employees’ support requests in Slack. What’s more, Confluence will let users get generated explanations of terms found in documents or automatic responses to questions based on data from these docs.

Microsoft is developing its own AI chips to train large language models and reduce its costly dependence on Nvidia. According to The Information, the corporation has been secretly developing the chips since 2019, and some Microsoft and OpenAI employees are already testing their compatibility with the latest large language models like GPT-4.

The European Parliament is working on measures to govern the use of AI, such as requiring chatbot developers to disclose whether they use copyrighted content, which would allow creators to claim payment. It also proposes holding developers responsible for abuses, rather than the smaller companies that use their technologies. There are controversial ideas too, such as a total ban on facial recognition in public spaces.

Scientific publications

Google researchers present the Universal Speech Model (USM), a family of 2B-parameter speech models trained on 12M hours of speech and 28B sentences of text in 300+ languages. USM is designed to be used on YouTube, e.g., for subtitles, and can automatically recognize both widely spoken languages like English and low-resource ones like Amharic and Azerbaijani.

A new scientific achievement of Meta AI: MuAViC (Multilingual Audio-Visual Corpus) is the first-ever benchmark that allows using audiovisual learning for high-precision speech translation. MuAViC was used to train Meta AI’s self-supervised framework, AV-HuBERT, designed to translate speech in an environment with background noise, where it outperforms other models.

Stanford researchers have investigated the ability of language models to interpret and generate uncertainty expressions. In many situations, information is ambiguous, and uncertainty expressions add nuance that aids decision-making. The finding: training models to express uncertainty rather than certainty can improve their calibration without sacrificing accuracy.

In domains involving sensitive information, it is difficult to release high-utility data for ML while protecting individual privacy. Amazon researchers have therefore introduced a data release framework called 3A (approximate, adapt, anonymize) that maximizes data utility while maintaining differential privacy through anonymization implemented with a noise-adding mechanism.
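The paper's exact noise-adding mechanism is not detailed here, but the classic example of this idea is the Laplace mechanism, which perturbs a query result with noise scaled to the query's sensitivity and the privacy budget ε. The sketch below is a generic illustration of that principle, not the 3A framework itself; the function names and the example dataset are purely illustrative.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for this single query."""
    return true_value + laplace_noise(sensitivity / epsilon, rng)

# Example: privately release the mean of ages clipped to [0, 100].
# For a dataset of n rows, this bounded mean has sensitivity 100 / n.
ages = [23, 35, 41, 29, 52, 38]
sensitivity = 100 / len(ages)
private_mean = laplace_mechanism(sum(ages) / len(ages), sensitivity, epsilon=1.0)
```

The tension the 3A paper addresses is visible even in this toy: a smaller ε means stronger privacy but noisier (less useful) released data.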

Google has developed PaLM-E, an LLM for robots that solves the problem of lacking large-scale datasets by transferring knowledge to a robotic system. Unlike its predecessor, PaLM, this model is supplemented with robot sensor data. Instead of relying on textual input alone, PaLM-E can take raw data directly from sensors, increasing robot learning efficiency.

Scientists from Japan propose a new method for reconstructing images based on human brain activity obtained from fMRI. It is based on Stable Diffusion, a latent diffusion model, which reduces computational cost while maintaining generative performance and reconstructs high-resolution images without extra training or fine-tuning of complex deep learning models.

Videos

In this interview, Nvidia’s founder and CEO, Jensen Huang, talks about the path his company has taken: from start to market leadership in GPUs, gaming, and now artificial intelligence. In addition, Jensen explains how the corporation has dealt with export controls imposed on China and geopolitical tensions around Taiwan, where most of Nvidia’s chips are manufactured.

The writer and mathematician Hannah Fry explores emotion recognition technologies, from a Scottish pig farm where scientists analyze animal expressions to Silicon Valley. Hannah will try to find out what consequences such innovations can lead to and whether artificial intelligence will make us more vulnerable by reducing our privacy or perhaps will protect society in this way.

Generative artificial intelligence never ceases to amaze us with its capability to create realistic images, code, and dialogues. But have you ever asked yourself how it is possible? Kate Soule, Senior Manager of IBM’s Exploratory AI Research team, tells how one form of generative AI, namely large language models, works and what potential value it brings to businesses.

Natasha Crampton, Chief Responsible AI Officer at Microsoft, shares the corporation’s approach to AI based on 6 principles: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. In addition, Natasha emphasizes the obvious need for safeguards that will help protect this technology from abuse and harmful exploitation.

At 2023’s Mobile World Congress in Barcelona, the Wall Street Journal’s tech columnist Joanna Stern met with Carme Artigas, Spain’s Secretary of State for Digitalization and AI. Among other things, the conversation touched on European officials’ concerns about regulating AI in light of its new abilities, Spain’s areas of focus in AI development, and EU rules on artificial intelligence.

Artists Kate Crawford, Trevor Paglen, and Refik Anadol tell how AI and ML algorithms require new approaches to art making, with Paola Antonelli and Michelle Kuo providing historical context. In this video, you will hear where art can be led by the development of AI and how artists are responding to AI breakthroughs and using this technology for creative purposes.

Kevin Stratvert, Microsoft’s former product manager, demonstrates how to change your voice into 10,000+ different versions using artificial intelligence. Voice.ai can make you sound like thousands of celebs and allows uploading voices to it. You may record your voice first and then modify it, or speak and change your voice in real time to use it in apps like Discord or Skype.

Podcasts

Anthropic, an AI startup, intends to raise up to $5 billion over the next two years to surpass its competitor OpenAI. Meanwhile, developers are trying to create an “autonomous system” by combining several instances of the GPT model. Google researchers, in turn, propose a deep learning system for robots to sort waste in office buildings. And this is only a part of the news.

Nina Schick, author of the book Deepfakes, and Eric Schwartz, head writer at Voicebot.ai, look at pressing AI issues. For example, they converse about a ChatGPT ban in Italy and actions of other countries regarding this product, compare it with other AI inventions, speak of ways to combat deepfakes, and mention a new generative AI model from Meta, Segment Anything.

Although Google was a pioneer in AI, it has recently been losing ground in this area. Miles Kruppa, a reporter at the Wall Street Journal, explains why the tech giant has become more cautious about chatbots and what is at stake now that Microsoft has overtaken it in the market. Miles recalls the fiasco that Bard bot’s release turned into and the company’s history overall.

Based on the materials one posts on social media, modern generative AI can create a “puppet version” of their voice. In the same way, it is possible to imitate public officials’ voices and thus generate credible deepfakes. The podcast host Lizzie O’Leary talks about AI as a tool in the hands of malicious actors with Pranshu Verma, a tech reporter at the Washington Post.

Computational biology and AI expert Casey Greene discusses ethical issues surrounding AI, the development of biobanks, personalized medicine, the use of technology for better patient care, and general skepticism about the efficiency of AI in healthcare. He also answers a playful question: “How are chihuahuas and blueberry muffins connected, and how does AI relate to it?”

Tamryn Kerr is the creative director of the newly-minted advertising agency The Hijinks Collective. Her company has recently used AI to generate images of a potential earthquake in the UK to help support news coverage of events in Turkey and Syria and raise funds for the UN. This episode is about how AI, in the hands of creative professionals, can serve a good cause.

______

Thank you for reading the digest. To stay updated on further ML/AI developments and, more importantly, not miss the next issue, you are welcome to follow our social media channels: LinkedIn and Facebook. See you soon!
