Discover the latest advancements in AI, from a never-ending UI learner to generative design and scientific discovery, along with AI’s potential benefits and considerations in healthcare.
Welcome to Hidden AI Daily. Let’s dive right in.
Apple and Carnegie Mellon Collaborate on Never-Ending UI Learner

RESEARCHERS UNVEIL THE NEVER-ENDING UI LEARNER: REVOLUTIONIZING APP ACCESSIBILITY THROUGH CONTINUOUS MACHINE LEARNING
Researchers from Apple and Carnegie Mellon University have collaborated to develop an AI system called the NeverEnding UI Learner.
This innovative system aims to revolutionize app accessibility through continuous machine learning. Traditional approaches to improving user interfaces (UIs) rely on static data sets of screenshots that humans have rated, a method that is both expensive and prone to errors.
The NeverEnding UI Learner takes a different approach, constantly interacting with real mobile applications to deepen its understanding of UI design patterns and trends. The AI system autonomously downloads apps, explores them, and performs interactions such as taps and swipes to gather data for training computer vision models. Because it does not require human-labeled examples, it can predict the tappability and draggability of UI elements as well as the similarity between screens.
One significant advantage of this system is its ability to identify challenging cases that human-labeled data sets may overlook. By interacting with live apps and observing the results, it continuously improves its models; after five training rounds, its tappability prediction accuracy reached an impressive 86%.
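To make the crawl-interact-retrain idea concrete, here is a minimal, purely illustrative sketch of such a loop. The actual crawler, device interaction layer, and vision models are not public, so every function below is a stub that simulates interaction outcomes, and a toy logistic model stands in for the real computer vision model.

```python
# Illustrative sketch of a "never-ending" tappability learner (not Apple/CMU code).
# Device and app interactions are simulated so the example runs on its own.
import math
import random

def explore_app(app_id, n_elements=20):
    """Stub: pretend to crawl an app and return UI elements as feature dicts."""
    return [{"width": random.uniform(20, 300),
             "height": random.uniform(20, 120),
             "has_text": random.random() < 0.7}
            for _ in range(n_elements)]

def tap_and_observe(element):
    """Stub: 'tap' the element and report whether the screen responded."""
    return random.random() < (0.8 if element["has_text"] else 0.3)

def featurize(e):
    return [e["width"] / 300, e["height"] / 120, float(e["has_text"])]

weights, bias = [0.0, 0.0, 0.0], 0.0   # toy stand-in for the vision model

def predict(x):
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

def train_round(examples, lr=0.1):
    global bias
    for x, y in examples:
        err = predict(x) - y
        bias -= lr * err
        for i in range(len(weights)):
            weights[i] -= lr * err * x[i]

# Never-ending loop: crawl an app, interact, label from the observed outcome, retrain.
for round_id in range(5):
    batch = []
    for element in explore_app(app_id=f"app-{round_id}"):
        tappable = tap_and_observe(element)            # self-supervised label
        batch.append((featurize(element), float(tappable)))
    train_round(batch)
    print(f"round {round_id}: retrained on {len(batch)} interactions")
```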
This never-ending learning capability opens up possibilities for more sophisticated representations of mobile UIs and interaction patterns.
AI-Designed Jellyfish Robot Collects Microplastics

HOW GENERATIVE AI IS REVOLUTIONIZING ENGINEERING DESIGN
In recent years, generative AI has become integral to engineering design processes. Companies like Aston Martin and NASA have embraced this technology to create more efficient and innovative designs. But what about using generative AI to imagine a better robot?
According to an article from IEEE Spectrum, one researcher did just that, using AI-powered image generators to co-design a self-reliant jellyfish robot for collecting ocean microplastics. The process involved providing prompts to the AI and refining the concepts based on the generated images. Generative AI allowed the project to explore new design possibilities and sparked creative thinking about the robot's form and locomotion. Even unsuccessful results from the AI generator provided valuable insights for engineering design.
However, using generative AI in co-creation does come with its challenges. The results can be unpredictable and difficult to control, requiring perseverance and dedication from designers. Ethical concerns arise regarding biases, misinformation, privacy, and intellectual property rights. Despite these challenges, generative AI tools have the potential to help engineers design innovative solutions that improve people’s lives. By exploring future system designs and thinking differently, we can create sustainable solutions for a better future.
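As a rough illustration of that prompt-and-refine workflow, here is a hedged sketch built on the open-source diffusers library as a stand-in image generator; the article does not name the specific tools or prompts the researcher used, so the model choice and prompt sequence below are assumptions for illustration only.

```python
# Minimal prompt-refinement loop for concept exploration (illustrative, not the
# researcher's actual setup). Requires the `diffusers` and `torch` packages.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

prompts = [
    "a soft-bodied jellyfish-inspired robot collecting microplastics underwater",
    # Each refinement narrows the concept based on what the previous image showed.
    "a translucent jellyfish robot with trailing filter tentacles, engineering concept sketch",
    "cross-section of a jellyfish robot showing a pumping bell and a microplastic filter",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]
    image.save(f"concept_{i}.png")   # the designer reviews each image and writes the next prompt
```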

Scientists Launch Polymathic AI
SCIENTISTS BEGIN BUILDING AI FOR SCIENTIFIC DISCOVERY USING TECH BEHIND CHATGPT
An international team of scientists, including researchers from the University of Cambridge, has launched a research collaboration called Polymathic AI. Polymathic AI aims to leverage the technology behind ChatGPT to build an AI-powered tool for scientific discovery. This initiative is set to change how AI and machine learning are used in science.
The team behind Polymathic AI plans to train their AI tool using numerical data and physics simulations from various scientific fields. By starting from a pre-trained model, as ChatGPT did, they hope to accelerate the process and improve accuracy compared to building a scientific model from scratch.
One unique aspect of this project is its collaboration with the Simons Foundation, which has provided valuable resources for prototyping these models for basic science research. The team comprises experts in physics, astrophysics, mathematics, artificial intelligence, and neuroscience.
Polymathic AI aims to uncover commonalities and connections between different scientific fields by learning from diverse sources across physics and astrophysics. The goal is to connect seemingly disparate subfields into a unified approach.
Furthermore, the team’s long-term vision is to democratize AI for science by making everything public and eventually providing a pre-trained model that the broader scientific community can access. To ensure accuracy, Polymathic AI will treat numbers as actual numbers during training and use real scientific data sets, capturing the underlying physics of the cosmos.
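As an illustration of what "treating numbers as actual numbers" can mean in practice, here is a hedged sketch (not Polymathic AI's code, which has not been detailed publicly) in which raw numeric measurements are projected directly into a model's embedding space rather than being split into text tokens.

```python
# Illustrative only: embed scalar measurements as continuous vectors for a transformer.
import torch
import torch.nn as nn

class NumericEmbedding(nn.Module):
    """Map each scalar measurement directly to a d_model-dimensional vector."""
    def __init__(self, d_model=64):
        super().__init__()
        self.proj = nn.Linear(1, d_model)

    def forward(self, values):                  # values: (batch, seq_len) floats
        return self.proj(values.unsqueeze(-1))  # -> (batch, seq_len, d_model)

# A short sequence of simulated physical measurements (values are made up).
measurements = torch.tensor([[1.23e-3, 4.56e-2, 7.89e-1]])
tokens = NumericEmbedding()(measurements)
print(tokens.shape)   # torch.Size([1, 3, 64]), ready to feed into a transformer
```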
AI in Healthcare: Revolutionizing Medicine

UNLOCKING AI’S POTENTIAL IN HEALTHCARE
In recent years, healthcare systems have undergone a digital transformation, opening up new possibilities for using artificial intelligence (AI) in medicine. According to Unite AI, AI has the potential to revolutionize healthcare by enhancing capabilities in diagnosis, treatment, and healthcare operations.
One key aspect of utilizing AI in healthcare is data. Healthcare data can be divided into two categories: health data and operations data. Health data includes unstructured notes and measures such as clinic wait times, while operations data focuses on resource allocation and optimizing telemedicine operations. However, it is essential to approach the integration of AI and machine learning (ML) into healthcare with caution. Data biases within AI algorithms can perpetuate societal biases and harm specific populations.
Additionally, noise within the data can mislead AI algorithms and lead to flawed insights. It’s also crucial not to overlook qualitative aspects of healthcare by relying solely on quantifiable data. Automation offers efficiency in healthcare tasks, but unquestioningly trusting AI without human oversight can be disastrous. A phased approach should be adopted to mitigate risks, starting with low-stakes tasks before escalating cautiously to high-risk areas where human oversight is vital.
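One concrete, low-tech way to act on those cautions before escalating to higher-stakes uses is to compare a model's accuracy across patient groups; the sketch below does exactly that with invented predictions and group labels, purely for illustration.

```python
# Hypothetical subgroup accuracy check on a validation set (data is invented).
from collections import defaultdict

# (predicted, actual, group) triples from an imagined triage model
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (0, 1, "group_b"), (0, 1, "group_b"),
]

hits, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in records:
    totals[group] += 1
    hits[group] += int(predicted == actual)

for group in sorted(totals):
    print(f"{group}: accuracy {hits[group] / totals[group]:.2f}")
# A large gap between groups is a signal to pause the phased rollout and investigate.
```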
Integrating AI into healthcare requires meticulous planning and oversight to ensure patient care is not compromised. Regular updates and training on contemporary data are essential for maintaining the relevance and efficacy of AI-driven medical solutions, including content generation, assistive healthcare, interactive chatbots, and content moderation.
Ukrainian Killer Drones Strike

AI ATTACK DRONES MAY BE KILLING WITHOUT HUMAN OVERSIGHT
In a groundbreaking development, Ukrainian attack drones equipped with artificial intelligence reportedly carry out attacks independently, without human oversight. These autonomous weapons have been designed to target vehicles such as tanks and have the potential to cause harm to Russian soldiers.
If these reports are accurate, this would mark the first confirmed case of killer robots in action. The use of AI-powered attack drones raises ethical concerns and questions about accountability. Without human decision-making, there is a risk of unintended casualties or mistaken targets. It also opens up possibilities for other countries to develop similar technologies, leading to an escalation in autonomous warfare.
However, it’s important to note that no casualties have been confirmed. This raises another question: are these attacks accurate and effective? While AI has advanced significantly in recent years, there may still be limitations and errors that could impact the success rate of these autonomous drones.
This development highlights the need for international regulations and discussions surrounding the use of AI in military applications. As technology advances rapidly, we must address the ethical implications and establish guidelines to ensure responsible deployment of autonomous weapons.
AI Breakthrough: Arabic Language Inclusion

THE UNIVERSITY OF SHARJAH RESEARCHERS DEVELOP ARTIFICIAL INTELLIGENCE SOLUTIONS FOR THE INCLUSION OF ARABIC AND ITS DIALECTS IN NATURAL LANGUAGE PROCESSING
Recently, researchers at the University of Sharjah have been working to develop artificial intelligence solutions that bring Arabic and its dialects into natural language processing. Despite being the fifth most widely spoken language globally, Arabic has been largely overlooked in this field.
Arabic is known for its complexity and richness, with a root-based word-formation system, multiple word forms, and the frequent omission of diacritics and vowels in written text. Additionally, Arabic dialects vary significantly, making it challenging to build models that can understand and generate text across multiple dialects.
To address these challenges, the researchers developed a deep learning system trained on a large and diverse data set of Arabic dialects. Training NLP systems on this data set improved chatbot performance, enabling them to understand various Arabic dialects more accurately.
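The researchers' exact architecture and data set are not described here, but the sketch below shows one standard way such a system could be built: fine-tuning a multilingual transformer for Arabic dialect identification with the Hugging Face transformers library. The model name, dialect labels, and example sentence are illustrative assumptions, not details from the paper.

```python
# Illustrative fine-tuning step for Arabic dialect identification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

dialects = ["MSA", "Egyptian", "Gulf", "Levantine", "Maghrebi"]  # assumed label set
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(dialects)
)

# One (text, dialect) example; real training iterates over a large, diverse corpus.
batch = tokenizer(["شلونك اليوم؟"], return_tensors="pt", padding=True)
labels = torch.tensor([dialects.index("Gulf")])

outputs = model(**batch, labels=labels)   # forward pass returns loss and logits
outputs.loss.backward()                   # one gradient step inside a training loop
```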
This research has caught the attention of major tech corporations like IBM and Microsoft, who have shown interest in incorporating these AI solutions into their systems. The applications for this technology are vast, from enabling accurate voice command recognition for people with disabilities to facilitating multilingual machine translation and content localization for Arabic-speaking markets.
Overall, this research is a significant step in making technology more accessible and inclusive for Arabic speakers.
Google Protects Users from IP Infringement

GOOGLE TO PROTECT USERS IN AI COPYRIGHT ACCUSATIONS
Google has announced its commitment to protecting users of its generative AI systems from allegations of intellectual property infringement. This aligns with similar assurances from other companies like Microsoft and Adobe.
Google’s strategy for intellectual property indemnification has two aspects:
- Firstly, it indemnifies users against claims arising from the training data used in its AI systems, including any copyrighted material involved. While this indemnity is not new, Google clarified that it now explicitly extends to scenarios involving copyrighted material.
- Secondly, Google will also protect users if they face legal action due to the results they obtain using its foundation models. However, this protection only applies if the users do not intentionally infringe upon the rights of others.
Microsoft has also committed to assuming legal responsibility for its Copilot products, while Adobe is dedicated to safeguarding enterprise customers from copyright, privacy, and publicity rights claims when using Firefly.
This commitment from significant tech companies highlights their recognition of the potential legal risks associated with using AI technology and their willingness to provide support and protection for their users.
Learn

EXPLORING ADVANCED MULTI-MODAL GENERATIVE AI
Today, we will dive into Advanced Multi-Modal Generative AI and how it revolutionizes content generation and understanding.
Advanced Multi-Modal Generative AI is a technology that aims to make computers capable of understanding and generating content in various forms, such as text, images, and sounds. It consists of three key modules: Input, Fusion, and Output.
The Input Module processes data types, including text, images, and audio. It prepares this data for further analysis.
The Fusion Module combines the processed data from the different modalities to make sense of the information as a whole. It learns relationships between modalities, such as describing an image in text or generating a relevant image from a textual description.
The Output Module generates meaningful responses based on the analyzed data. It can generate text, images, and speech.
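To make the three modules concrete, here is a toy PyTorch sketch, purely illustrative rather than any particular product's architecture: an Input module that encodes each modality, a Fusion module that combines them, and an Output module that produces a prediction from the fused representation.

```python
# Toy three-module multi-modal model (illustrative only).
import torch
import torch.nn as nn

class InputModule(nn.Module):
    """Encode each modality into an embedding of the same size."""
    def __init__(self, text_dim=300, image_dim=512, d_model=128):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, d_model)
        self.image_enc = nn.Linear(image_dim, d_model)

    def forward(self, text_feats, image_feats):
        return self.text_enc(text_feats), self.image_enc(image_feats)

class FusionModule(nn.Module):
    """Combine modalities so relationships between them can be learned."""
    def __init__(self, d_model=128):
        super().__init__()
        self.mix = nn.Linear(2 * d_model, d_model)

    def forward(self, text_emb, image_emb):
        return torch.relu(self.mix(torch.cat([text_emb, image_emb], dim=-1)))

class OutputModule(nn.Module):
    """Turn the fused representation into a response (here, class logits)."""
    def __init__(self, d_model=128, n_outputs=10):
        super().__init__()
        self.head = nn.Linear(d_model, n_outputs)

    def forward(self, fused):
        return self.head(fused)

# Wire the modules together on random stand-in features.
text_feats, image_feats = torch.randn(1, 300), torch.randn(1, 512)
inp, fuse, out = InputModule(), FusionModule(), OutputModule()
logits = out(fuse(*inp(text_feats, image_feats)))
print(logits.shape)   # torch.Size([1, 10])
```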
This technology has several important features:
- Processing Diverse Inputs: Advanced Multi-Modal AI can handle different types of data simultaneously, including text, images, and audio.
- Understanding Relationships: The technology can understand relationships between different modalities.
- Contextual Awareness: The AI is contextually aware and can generate content that is relevant to the given context.
- Training Data Requirements: Multi-modal training data is required for meaningful cross-modal representations.
Thank you for reading Hidden AI Daily!

Stay Tuned for Tomorrow’s Hidden AI Daily
That’s it for today’s issue of Hidden AI Daily! Remember, this newsletter is released daily to keep you up-to-date on the latest in artificial intelligence and machine learning. Your feedback is important to us, so please let us know what you think. We appreciate your input, which helps us improve this newsletter. Stay tuned for tomorrow’s edition!
