AI RSS

AI, Synthetic Intelligence -

AI's stunning new skills. To learn AI, visit: https://brilliant.org/digitalengine where you'll also find loads of fun courses on maths, science and computer science. AI robots, with Max Tegmark, Dario Amodei, Emad Mostaque, the Tesla bot, Ameca, Digit, Pi AI and GPT-4. Thanks to Brilliant for sponsoring this video.

Sources:
Theory of mind may have spontaneously emerged in large language models: https://www.gsb.stanford.edu/faculty-research/working-papers/theory-mind-may-have-spontaneously-emerged-large-language-models
Letter signed by 1,500 professors (and thousands of other experts): https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Statement on AI risk from the leaders of AI firms: https://www.safe.ai/statement-on-ai-risk
1.5 million people take the Turing test: https://arxiv.org/abs/2305.20010
RT-2: New model translates vision and language into action: https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action?utm_source=twitter&utm_medium=social&utm_campaign=rt2
Embodied AI: Bridging the Gap to Human-Like Cognition: https://www.humanbrainproject.eu/en/follow-hbp/news/2023/08/09/embodied-ai-bridging-gap-human-cognition/#:~:text=Our%20brain%20has%20evolved%20through,connection%20to%20the%20physical%20world.
AI and robots help understand animal language: https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/
Karen Bakker: Could an orca give a TED Talk? https://www.ted.com/talks/karen_bakker_could_an_orca_give_a_ted_talk?utm_source=rn-app-share&utm_medium=social&utm_campaign=tedspread
Synthesizing Physical Character-Scene Interactions (learning from simulations): https://dl.acm.org/doi/abs/10.1145/3588432.3591525
Smarter people tend to have more advanced moral reasoning skills: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5167721/#:~:text=For%20example%2C%20Derryberry%20et%20al,predictor%20for%20the%20moral%20scores.
Why are smarter people more pro-social? https://www.sciencedirect.com/science/article/abs/pii/S0160289618301466
IQ and society: https://blogs.scientificamerican.com/beautiful-minds/iq-and-society/#:~:text=IQ%20correlates%20positively%20with%20family,habits%2C%20illness%2C%20and%20morality.

Read more

AI, Synthetic Intelligence -

Get on my daily AI newsletter 🔥 https://natural20.beehiiv.com/subscribe [News, Research and Tutorials on AI]
See more at: https://natural20.com/
The Paper: https://arxiv.org/abs/2309.17421
My AI Playlist: https://www.youtube.com/playlist?list=PLb1th0f6y4XROkUAwkYhcHb7OY9yoGGZH

[TIMELINE]
[00:00] Intro
[02:22] Abstract
[03:53] Accounting
[04:44] Attention to Detail
[06:23] Image Recognition Across Domains
[08:53] Medical Reasoning
[11:23] Making Coffee + Embodied Agents
[12:54] Industry, Manufacturing and QA
[17:11] Graphical User Interface Navigation
[26:24] Understanding Video, Emotions and Aesthetics
[29:10] Analyzing Dash Cam Footage
[30:48] Improving AI Image Prompts
[32:42] Visual Pointing
[37:51] Charts, Languages, Memes and Clues
[51:23] Final Points

Read more

AI, David Shapiro -

Medium article: https://medium.com/@dave-shap/become-a-gpt-prompt-maestro-943986a93b81
Slide Deck: https://github.com/daveshap/YouTube_Slide_Decks/blob/main/Business%20and%20Product/LLM%20Prompt%20Taxonomy.pdf

Large language models (LLMs) like GPT-4 have shown impressive abilities to generate humanlike text, hold conversations, and demonstrate knowledge across many domains. However, there is still confusion around exactly how LLMs work and what capabilities they currently possess. This passage aims to provide a high-level taxonomy of LLM abilities and limitations.

LLMs are deep learning neural networks trained on massive text datasets to predict the next word in a sequence. This allows them to build complex statistical representations of language and accumulate world knowledge from their training data. LLMs have no explicit rules or knowledge; their capabilities emerge from recognizing patterns.

LLMs excel at reductive operations like summarization, distillation, and extraction, which condense large inputs by identifying salient information. Summarization produces concise overviews of documents. Distillation extracts key facts and principles. Extraction retrieves targeted information like names, dates, or figures.

Transformational techniques like paraphrasing, translation, and restructuring reshape text without losing meaning. Paraphrasing rewrites text with different words and phrasing while preserving meaning. Translation converts between languages. Restructuring improves logical flow and readability. Transformations leverage LLMs' understanding of linguistic conventions and narrative flow.

Generative tasks like drafting, planning, brainstorming, and amplifying synthesize new content from limited input. Drafting can expand prompts into coherent documents. Planning formulates step-by-step strategies to achieve goals based on parameters. Brainstorming produces creative possibilities from prompts. Amplification adds explanatory details to existing text. Generative abilities are more variable but rapidly improving.

Examined through Bloom's Taxonomy, LLMs exhibit skills ranging from basic remembering of facts to the highest level of creating original content. Their statistical learning acts as a knowledge repository to query. LLMs also demonstrate strong abilities in understanding concepts, applying knowledge, analyzing passages, and evaluating content. With the right prompting, they can create novel stories, articles, and dialogue.

LLMs have vast latent knowledge that is not explicit in their training data. This includes memorized facts, general world knowledge, and learned cognitive skills for tasks like translation. Latent knowledge forms a dense reservoir that requires careful probing with prompts and techniques to extract. While promising, reliance on latent knowledge highlights LLMs' need to better index and activate their own internal knowledge.

Emergent capabilities like theory of mind, implied cognition, logical reasoning, and in-context learning have arisen from recognizing intricate patterns, not from hardcoded rules. Theory of mind suggests models can distinguish their own and others' perspectives. Implied cognition points to dynamic reasoning when generating text. Logical reasoning abilities hint at inferring abstract principles from data. Rapid in-context learning demonstrates knowledge acquisition abilities. Rather than a bug, LLMs' ability to fabricate plausible statements represents a core feature of intelligence.

Humans also exhibit a spectrum from creativity to hallucination based on uncontrolled pattern generation. The ideal is not suppressing generation but directing it responsibly. Research into alignment and ethics can allow beneficial creativity to flourish while minimizing harms. Maintaining factual grounding and conveying uncertainty are key precautions.

In summary, LLMs have diverse capabilities and limitations that require continued research. With responsible development focused on augmenting human intelligence, LLMs offer exciting potential while managing risks. Their latent knowledge and emergent properties highlight promising directions for elevating reasoning, creativity, and understanding.
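To make the taxonomy concrete, here is a minimal Python sketch that encodes the reductive, transformational, and generative operation classes as reusable prompt templates. The template wording, the PROMPTS table, and the llm_complete placeholder are illustrative assumptions, not code from the video or the linked article; swap in whatever LLM client you actually use.

```python
# Minimal sketch of the prompt taxonomy as reusable templates.
# Template wording is illustrative; `llm_complete` is a placeholder
# for a real LLM API call and is not part of the source material.

PROMPTS = {
    # Reductive operations: condense a large input to its salient parts.
    "summarize":   "Summarize the following text in 3 sentences:\n\n{text}",
    "distill":     "List the key facts and principles in the following text:\n\n{text}",
    "extract":     "Extract every {target} mentioned in the following text:\n\n{text}",
    # Transformational operations: reshape text without losing meaning.
    "paraphrase":  "Rewrite the following text in different words, preserving its meaning:\n\n{text}",
    "translate":   "Translate the following text into {language}:\n\n{text}",
    "restructure": "Reorganize the following text for logical flow and readability:\n\n{text}",
    # Generative operations: synthesize new content from limited input.
    "draft":       "Expand the following outline into a coherent document:\n\n{text}",
    "brainstorm":  "Brainstorm 10 distinct ideas for: {text}",
    "amplify":     "Add explanatory detail and examples to the following text:\n\n{text}",
}

def build_prompt(operation: str, text: str, **params: str) -> str:
    """Fill in the template for one taxonomy operation."""
    return PROMPTS[operation].format(text=text, **params)

def llm_complete(prompt: str) -> str:
    """Placeholder: replace with a call to your LLM API of choice."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt(
        "extract",
        "Meeting on 2023-10-04 with Ada and Grace.",
        target="date and name",
    )
    print(prompt)  # inspect the reductive-extraction prompt before sending it
```

Naming each operation as a template makes the taxonomy testable in practice: the same input can be run through a reductive and a generative template side by side to compare how much the output condenses versus expands.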

Read more

AI, AI Ethics, AI Risk, Daniel Schmachtenberger, Technology -

Daniel Schmachtenberger, a social philosopher and founding member of The Consilience Project, talks about the metacrisis, nuclear weapons, AI, and consciousness.

Read more

AGI, AI, Max Tegmark -

Keeping AI under control through mechanistic interpretability
The Impact of ChatGPT and other large language models on physics research and education (2023)
Event organizers: Kevin Burdge, Joshua Borrow, Mark Vogelsberger
Session 1: The computer science underlying large language models

Read more
