Welcome to the October 23, 2023, edition of Hidden AI News Daily, where we delve into the latest developments, breakthroughs, and trends shaping the world of artificial intelligence.
In today’s newsletter, we explore the launch of a new generative AI model, the application of AI in recruitment, and how top AI companies fare in the transparency test. We also look at some of the intriguing ways AI is being used to preserve memories and reconstruct 3D models. All this and more, to keep you updated on the dynamic universe of AI. Let’s dive in!
Fuyu-8B Generative AI Model Is Simple, Fast, and Powerful

- Fuyu-8B is a multimodal foundation model released by Adept AI, a generative AI company.
- It is a streamlined version of the model powering the Adept AI platform.
- Fuyu-8B is tailored for digital agent scenarios and excels in answering questions from graphs or understanding concepts across varying image resolutions.
- It has a deliberately simple architecture with no specialized image encoder: image patches are projected directly into the transformer, which contributes to its speed and makes it easier to understand and deploy (see the sketch after this list).
- Fuyu-8B is being actively refined for powering digital agents, representing a trend in generative AI.
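The "no image encoder" point deserves a closer look: in a Fuyu-style design, flattened image patches are linearly projected straight into the transformer's embedding space and treated like any other tokens. The PyTorch sketch below illustrates that idea; it is not Adept's code, and the dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of Fuyu-style input handling (illustrative only, not
# Adept's code): image patches are linearly projected into the same
# embedding space as text tokens, so a plain decoder-only transformer
# processes one mixed sequence with no separate image encoder.

d_model = 512                             # assumed embedding width
patch_size = 30                           # Fuyu reportedly uses 30x30 patches
patch_dim = patch_size * patch_size * 3   # flattened RGB patch

text_embed = nn.Embedding(32000, d_model)   # assumed vocabulary size
patch_proj = nn.Linear(patch_dim, d_model)  # the whole "image encoder"

text_ids = torch.randint(0, 32000, (1, 16))  # 16 text tokens
patches = torch.randn(1, 9, patch_dim)       # 9 flattened image patches

sequence = torch.cat([patch_proj(patches), text_embed(text_ids)], dim=1)
print(sequence.shape)  # torch.Size([1, 25, 512]): one sequence for the decoder
```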
Trustworthiness Assessment of GPT Models by Microsoft Research
Microsoft Research published an assessment of trustworthiness in GPT models, evaluating dimensions such as toxicity, privacy, and adversarial robustness. The research addresses concerns about large language models’ potential biases and ethical implications.
Meta AI’s Paper on Reconstructing Images from Brain Activity

Meta AI published a paper on an AI architecture that can reconstruct images from brain activity. This breakthrough could have significant implications for neuroimaging research and potentially enable communication with individuals unable to speak or move due to neurological conditions.
New Calibration Method for In-context Learning in LLMs by Google Research
Google Research introduced a new calibration method for in-context learning in large language models (LLMs). This method aims to improve the reliability of LLMs when generating responses by considering both context-specific information and global knowledge.
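Whatever the paper's exact recipe, the general shape of calibration methods in this family is easy to sketch: estimate the bias a prompt induces over the label space, then divide it out of each prediction. The following is a hedged, batch-style illustration in Python; it is not the paper's algorithm, and all names and numbers are invented.

```python
import numpy as np

# Illustrative sketch of batch-style calibration for in-context
# classification (NOT the paper's exact method): estimate per-class
# bias as the mean predicted probability over a batch of unlabeled
# inputs, then divide it out of each individual prediction.

def calibrate(probs: np.ndarray) -> np.ndarray:
    """probs: (batch, n_classes) LLM label probabilities under a fixed prompt."""
    bias = probs.mean(axis=0, keepdims=True)      # contextual bias estimate
    adjusted = probs / bias                       # remove the shared bias
    return adjusted / adjusted.sum(axis=1, keepdims=True)  # renormalize

# Toy example: the prompt skews every prediction toward class 0.
raw = np.array([[0.70, 0.30],
                [0.60, 0.40],
                [0.55, 0.45]])
print(calibrate(raw).round(3))  # decision boundaries shift back toward 50/50
```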
Social and Ethical Risks of AI Systems Explored by Google DeepMind
Google DeepMind published a paper discussing the social and ethical risks of artificial intelligence systems. The paper highlights challenges like fairness, transparency, accountability, and human-AI collaboration. Understanding these risks is crucial for the responsible development and deployment of AI technologies.
NVIDIA Open Sources TensorRT-LLM to Accelerate Performance
NVIDIA open-sourced TensorRT-LLM, a framework designed to accelerate the performance of large language models (LLMs) on NVIDIA GPUs. This framework provides optimizations and tools to enhance the efficiency of LLM inference, enabling faster and more efficient natural language processing tasks.
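TensorRT-LLM's internals are beyond a newsletter summary, but one optimization that inference frameworks in this space commonly rely on is key-value caching: past attention keys and values are stored so each decoding step only computes work for the newest token. The toy PyTorch sketch below shows the idea; it is conceptual only, not TensorRT-LLM code.

```python
import torch

# Toy single-head attention decode step with a KV cache (conceptual
# illustration of why inference frameworks cache past keys/values).

d = 64
k_cache, v_cache = [], []

def decode_step(x_new: torch.Tensor) -> torch.Tensor:
    """x_new: (1, d) embedding of the newest token only."""
    # In a real model, keys/values come from learned projections of x_new.
    k_cache.append(x_new)          # cache the new token's key
    v_cache.append(x_new)          # cache the new token's value
    K = torch.cat(k_cache)         # (t, d), reused rather than recomputed
    V = torch.cat(v_cache)
    attn = torch.softmax(x_new @ K.T / d ** 0.5, dim=-1)  # (1, t)
    return attn @ V                # (1, d) attention output

for _ in range(5):                 # decode 5 tokens
    out = decode_step(torch.randn(1, d))
print(out.shape, len(k_cache))     # torch.Size([1, 64]) 5
```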
ML Algorithms for Personalized Recipe Recommendations in The New York Times
The New York Times discusses the machine learning algorithms behind its platform’s personalized recipe recommendations. These algorithms analyze user preferences and behavior to provide tailored recipe suggestions, enhancing the overall user experience.
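The Times has not published its models here, but the core pattern behind this kind of personalization is straightforward to sketch: score unseen recipes by their similarity to recipes a user has already engaged with. Below is a minimal item-based collaborative-filtering toy in Python; the data is invented and this is not the NYT's actual system.

```python
import numpy as np

# Toy item-based collaborative filtering (illustrative only). Rows are
# users, columns are recipes, entries are engagement signals (e.g.,
# saves or ratings).

R = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [0, 0, 4, 5]], dtype=float)

# Cosine similarity between recipe columns.
norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)

def recommend(user: int, top_k: int = 2) -> np.ndarray:
    scores = R[user] @ sim                 # similarity-weighted scores
    scores[R[user] > 0] = -np.inf          # hide recipes already seen
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))  # recipes most similar to what user 0 engaged with
```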
Pinterest’s Architecture for Anomaly Detection
Pinterest shares insights into the architecture that allows it to plug different anomaly detection algorithms into its platform. This flexible design enables continuous monitoring and detection of abnormal patterns or behaviors across various systems, helping ensure a secure and reliable user experience.
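The architectural idea described here, a common interface that lets detectors be swapped without touching the surrounding pipeline, maps naturally onto a plug-in pattern. The sketch below shows one generic way to express it in Python; it is an illustration, not Pinterest's code.

```python
from typing import Protocol, Sequence

# Generic "pluggable detector" pattern (illustrative only): any object
# with a detect() method can be dropped into the monitoring pipeline.

class AnomalyDetector(Protocol):
    def detect(self, series: Sequence[float]) -> list[int]: ...

class ZScoreDetector:
    """Flags points more than `threshold` standard deviations from the mean."""
    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold

    def detect(self, series: Sequence[float]) -> list[int]:
        n = len(series)
        mean = sum(series) / n
        std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5 or 1.0
        return [i for i, x in enumerate(series)
                if abs(x - mean) / std > self.threshold]

def monitor(series: Sequence[float], detector: AnomalyDetector) -> list[int]:
    return detector.detect(series)   # the detector is swappable per metric

print(monitor([1, 1, 2, 1, 50, 1], ZScoreDetector(threshold=2.0)))  # [4]
```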
Chinese AI Startup Zhipu Raises $340 Million in Funding
Chinese AI startup Zhipu raised $340 million from investors, including Tencent and Alibaba. This funding will support further research and development efforts and expansion plans for Zhipu in the rapidly growing AI market.
NVIDIA’s AI Revolutionizes 3D Reconstruction: 20% Faster Game Graphics
NVIDIA, the leading graphics processing unit (GPU) manufacturer, has recently announced a new technique that promises to revolutionize game graphics. The company’s research paper introduces a novel approach to reconstructing 3D objects from images, which leads to better generative 3D modeling, improved physics simulations, and more accurate recommendations for 3D printing.
Previous techniques, such as Marching Cubes and Neural Dual Contouring, have been used for similar purposes but have certain limitations. However, NVIDIA’s new technique offers perfect reconstruction with greater flexibility. This advancement opens possibilities for enhanced physics simulations and realistic animations in video games.
One interesting aspect of this original approach is its combination with text-to-3D techniques. By integrating text information into generating geometry, the resulting models achieve high-quality visualization using 20% fewer triangles than traditional methods. Additionally, these models require less memory and can be rendered faster.
Marching Cubes, one of the techniques mentioned earlier, has faced challenges in 3D printing applications. NVIDIA’s new method overcomes these issues by producing assets better suited to 3D printing. Although not yet ready for use in AAA games due to certain limitations, this promising development paves the way for future advances in game graphics.
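For context on the baseline being improved: Marching Cubes turns a scalar field (such as a signed distance function) into a triangle mesh by finding where the field crosses a chosen iso-level. The snippet below runs the classic algorithm via scikit-image on a toy sphere; it shows the kind of extraction NVIDIA's method refines, not the new technique itself.

```python
import numpy as np
from skimage import measure

# Classic Marching Cubes on a toy signed-distance field of a sphere
# (the baseline technique the article mentions, via scikit-image).

n = 64
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5   # negative inside the sphere

# Extract the zero iso-surface as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # vertex and triangle counts of the mesh
```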
In summary:
- NVIDIA has developed a new technique that reconstructs 3D objects from images.
- This technique improves generative 3D modeling, physics simulation accuracy, and recommendations for 3D printing.
- Previous techniques like Marching Cubes and Neural Dual Contouring had limitations.
- The new technique offers perfect reconstruction with greater flexibility.
- It can be combined with text-to-3D techniques for high-quality geometry visualization.
- Models generated using this method use fewer triangles and render faster than traditional approaches.
- The new technique also overcomes issues with 3D printing.
- While not yet suitable for AAA games, this development shows promise for future graphics advancements.
AI in Recruitment Is Changing Hiring

Artificial Intelligence (AI) in recruitment has been gaining popularity in Australia. AI technology is utilized for resume screening and preliminary interviews to streamline the hiring process, save time, and assess candidates more effectively. While this advancement may seem promising, concerns exist regarding its impact on job seekers and potential biases.
1. High-Risk Activity: Some researchers view using AI in recruitment as a high-risk activity. The lack of transparency in the AI screening process can lead to indirect discrimination against certain groups of job seekers. Biases have been observed, such as favoring male resumes and disadvantaging candidates with non-western names.
2. Jobseeker Frustration: The job search can be stressful, disheartening, and draining for job seekers. Its prolonged and repetitive nature can result in job-search depression, leading to frustration and resentment toward the AI-powered hiring process.
3. Impersonal Interviews: Some job seekers have experienced AI-driven video interviews and timed tests, which they find impersonal compared to face-to-face interviews. These automated processes lack the personal touch of human interaction during initial interviews.
4. Need for a Human-Centered Approach: The article highlights the need for a more personalized and human-centered approach in recruitment. Initial interviews are important for job seekers and employers to understand each other better and evaluate if a good fit exists between them.
5. Avoiding Biases: The effectiveness of AI technology in hiring depends on avoiding bias and unfairness in how systems are built. Employers should ensure their algorithms assess all candidates fairly on relevant criteria, without favoring particular demographics or backgrounds (a simple bias check is sketched after this list).
6. Adhering to Employer Expectations: Job seekers should adhere to employer expectations when applying for a position. It is important to read job descriptions carefully and tailor applications accordingly, highlighting relevant skills and experiences that align with the role.
7. Online Presence: Job seekers should be mindful of their online presence and avoid anything that could raise questions about their integrity or behavior. Employers may use social media platforms to gather additional information about candidates, so maintaining a professional image online is crucial.
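On point 5 above, one widely used sanity check (a standard guideline, not something the article itself prescribes) is the "four-fifths rule" from US hiring guidance: any group's selection rate should be at least 80% of the highest group's rate. A minimal sketch, with invented numbers:

```python
# Simple adverse-impact check on screening outcomes (the "four-fifths
# rule"; the data here is invented for illustration).

outcomes = {
    # group: (candidates screened in, candidates screened)
    "group_a": (40, 100),
    "group_b": (22, 100),
}

rates = {g: passed / total for g, (passed, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
# group_b's ratio is 0.55 (< 0.8), suggesting the screen warrants review.
```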
Top AI Companies Fail Transparency Test

In July and September, 15 of the biggest AI companies signed on to the White House’s voluntary commitments to manage the risks posed by AI. One key aspect of managing these risks is transparency: sharing information across the industry and with governments, civil society, and academia, and publicly reporting AI systems’ capabilities and limitations.
Stanford researchers created a transparency index to assess the transparency of AI models developed by these companies. The index comprises one hundred indicators covering aspects such as training data disclosure, model properties, function, distribution, and use. The highest score any company achieved was fifty-four out of one hundred.
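Mechanically, an index like this is simple: each indicator is either satisfied or not, and the score is the count of satisfied indicators. A toy version, with a handful of invented indicators:

```python
# Toy indicator-based transparency score (invented indicators; the
# real index has 100 of them across several domains).

indicators = {
    "training_data_disclosed": False,
    "model_size_disclosed": True,
    "compute_disclosed": False,
    "data_labor_disclosed": False,
    "downstream_use_policy": True,
}

score = sum(indicators.values())
print(f"{score}/{len(indicators)} indicators satisfied "
      f"({100 * score / len(indicators):.0f}%)")
```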
Unfortunately, most companies scored poorly in terms of disclosing their training data. OpenAI’s decision to withhold information about its GPT-4 model’s architecture, size, and training process was cited as an example of a lack of transparency. On the other hand, Hugging Face’s Bloomz model received the highest score in this category.
Another area where companies lacked transparency was providing information about the labor involved in refining their models. Companies did not disclose details about the workers involved or their wages.
However, it wasn’t all negative news regarding transparency in AI development. According to the index, Hugging Face, Meta (formerly Facebook), and Stability AI were the most transparent developers. All three release open models, which inherently involves sharing more information with stakeholders.
The Stanford researchers behind this index plan to update it annually. They hope policymakers will use this tool to inform future AI legislation efforts. The goal is to highlight areas where intervention may be necessary and to encourage companies to improve their transparency practices.
It is important to note that while transparency is a crucial aspect of ethical practices in AI development, it does not guarantee ethical behavior across all aspects of a model’s creation or use. A model could still earn points for transparency even if it were trained on copyrighted material or involved underpaid workers. However, the transparency index serves as a first step towards addressing transparency and ethical issues in AI.
AI vs Humans: Who’s More Creative?
In a recent article from The Guardian, Stefan Stern discusses whether artificial intelligence (AI) is more creative than the human brain. He expresses his doubts and emphasizes the importance of humans staying in charge.
One example that highlights AI’s potential for creativity is ChatGPT, an AI model developed by OpenAI. In a study conducted by researchers at Stanford University, ChatGPT was found to be more creative than MBA students in generating new product ideas. However, the product ideas generated by AI were not particularly innovative.
This raises concerns about using AI to provide answers for online consumer surveys. While AI can offer speed and efficiency in generating ideas, there is a question of whether it can deliver novel and groundbreaking concepts. The concept of novelty in new product creation is highly debated, with some arguing that true innovation requires human intuition and an understanding of societal needs.
Stern suggests that instead of replacing humans with AI in creative tasks, it could be used as a “creative co-pilot.” This means that while AI can assist in generating initial drafts or providing inspiration, it should never replace the unique perspective and artistic sensibilities that humans bring to the table.
The recent Writers Guild of America strike in Hollywood is another example of why humans should remain at the helm of creativity. During this strike, writers demanded proper recognition and credit for their work amidst the growing use of algorithms and automated content generation. While AI can generate a first draft or assist in content creation, human writers bring depth, emotion, and firsthand experiences to their work.
To further illustrate the potential of human creativity, Stern mentions iconic cultural landmarks like the Vatican galleries and the Sistine Chapel. These works of art are a testament to what humans can create when given free rein over their imagination and talents.
In conclusion, while AI may have its merits in generating ideas or assisting in certain aspects of the creative process, humans should remain in charge. True creativity requires the human touch – the ability to empathize, connect with emotions, and think beyond existing boundaries. AI may be a tool, but it can never replace the unique capabilities of the human brain.
AI Brings Back Deceased Loved Ones
Artificial intelligence (AI) has become increasingly advanced in recent years, with applications ranging from autonomous vehicles to voice-activated personal assistants. But now, some individuals are putting AI to a far more personal use: bringing back their deceased loved ones.
One such individual is James Vlahos, who created a chatbot called Dadbot to preserve his father’s memory after his death. Dadbot was designed to mimic his father’s speech patterns and personality, allowing Vlahos to converse with an AI version of his dad. The experience inspired him to market similar technology through his app, Hereafter.ai.
Dhilon Solanki is another believer in the power of AI to preserve legacies after death. He envisions a future where people can upload their memories and experiences into an AI system that generates personalized content long after they’re gone. Solanki believes that this digital afterlife could serve as a way for future generations to learn from the wisdom and experiences of those who came before them.
The demand for a digital afterlife may increase even further as advancements in longevity and anti-aging techniques continue. The desire to leave a legacy becomes even stronger with more people living longer. And what better way to do so than through AI?
One potential application of AI in preserving legacies is the creation of 3D holographic avatars of deceased family members. These avatars could be programmed with information about their personalities, interests, and life stories, allowing future generations to interact with them as if they were still alive.
However, using AI to bring back loved ones raises many ethical questions and concerns. Some argue that it could lead to emotional manipulation or exploitation by companies seeking profit from grieving individuals. Additionally, there are concerns about privacy and consent when using someone’s likeness or personal information in AI systems.
Despite these concerns, using AI to preserve legacies and bring back loved ones is an intriguing concept that holds both promise and controversy. It remains to be seen how society will navigate this new frontier of AI technology and its implications for the afterlife.
Thank You for Reading Hidden AI News Daily!

Thank you for joining us today for Hidden AI News Daily! We hope you have enjoyed the latest updates on AI and technology. Remember, this newsletter is released daily to keep you informed and inspired.
We would love to hear your feedback on this issue! Your input is invaluable in making this newsletter even better. Please provide your thoughts and suggestions directly in the mobile app.
Stay tuned for more exciting news and advancements in artificial intelligence. Have a wonderful day!
