CUNY Tech Prep: This week, we explored open-source models with Hugging Face. For my Wordle group project, I'm planning to incorporate Meta's NLLB (No Language Left Behind) translation model, which supports over 200 languages. The model uses Torch tensors, essentially multi-dimensional arrays that efficiently process large-scale data, to handle translations smoothly. By integrating NLLB, I can offer real-time translations and personalized hints, making the game accessible to a diverse audience. Check out Meta's model here: https://lnkd.in/eCbZJz6x #MachineLearning #DataScience #CTP #HuggingFace #Meta
Ahmad Basyouni’s Post
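A minimal sketch of what that integration might look like with the Hugging Face transformers pipeline. The language-code helper and the distilled 600M checkpoint are my own assumptions for illustration, not details from the project:

```python
# Sketch: translating Wordle hints with Meta's NLLB via Hugging Face transformers.
# NLLB identifies languages with FLORES-200 codes such as "eng_Latn" or "spa_Latn".

# Hypothetical helper: map a player's language choice to an NLLB code.
NLLB_CODES = {
    "english": "eng_Latn",
    "spanish": "spa_Latn",
    "french": "fra_Latn",
    "arabic": "arb_Arab",
}

def nllb_code(language: str) -> str:
    """Return the FLORES-200 code NLLB expects for a language name."""
    return NLLB_CODES[language.lower()]

def translate_hint(hint: str, target_language: str) -> str:
    """Translate an English hint into the player's language with NLLB."""
    # Imported lazily so the code-mapping helper works without torch installed.
    from transformers import pipeline

    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",  # smaller distilled checkpoint
        src_lang="eng_Latn",
        tgt_lang=nllb_code(target_language),
    )
    return translator(hint)[0]["translation_text"]
```

For example, `translate_hint("The word starts with a vowel.", "spanish")` would download the checkpoint on first use and return the Spanish hint.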
More Relevant Posts
We recently got huge leverage from Meta's Llama 3.1 large language model, released on July 24th. The most significant difference between Llama 3 and Llama 3.1 is context length: Llama 3 offers 8,000 tokens (roughly 6,000 words of English text), while Llama 3.1 offers a 128,000-token context window. That is a huge advantage for anyone who wants to supply more fine-tuned instructions, more few-shot prompts, and more input data to refine the output. This can significantly improve both the accuracy (the percentage of outputs that match the ground-truth responses) and the recall (the percentage of expected labels that the LLM actually produces) of the generated text, whether that is sentiment, classifications, etc., of the input text. #largelanguagemodels #LLM #llama3.1 #llama3 #meta #Amplifai
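As a concrete illustration of those two metrics, here is a toy evaluation of LLM-generated sentiment labels against ground truth (the label set and predictions are made up):

```python
# Toy evaluation of LLM-generated classification labels.
# accuracy: fraction of predictions matching the ground-truth label.
# recall (per class): fraction of ground-truth instances of a class
# that the model actually labeled as that class.

def accuracy(preds, truths):
    return sum(p == t for p, t in zip(preds, truths)) / len(truths)

def recall(preds, truths, label):
    relevant = [p for p, t in zip(preds, truths) if t == label]
    return sum(p == label for p in relevant) / len(relevant)

truths = ["pos", "neg", "pos", "neu", "pos", "neg"]
preds  = ["pos", "neg", "neu", "neu", "pos", "pos"]

print(accuracy(preds, truths))       # 4 of 6 predictions match -> 0.667
print(recall(preds, truths, "pos"))  # 2 of 3 positives recovered -> 0.667
```

A longer context window helps here indirectly: more few-shot examples of each class tend to raise both numbers.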
Excited to share some news about Meta's latest large language model, Llama 3! The new model outperforms previous versions on a variety of benchmarks, including text-based responses, prompt diversity, reasoning ability, and code generation. 🦙 Meta has also created a new dataset to evaluate Llama 3's capabilities. Overall, Llama 3 represents a significant step forward in large language model technology. Check out more details: https://lnkd.in/g246W9bN #artificialintelligence #machinelearning #llama #meta
Bringing efficiency and simplicity to AI infrastructure!
https://lnkd.in/epEkZ4aB Hammerspace at work, enabling Meta to create Llama 3!
Meta is reportedly working on a technology that can translate speech from multiple languages in real time. Turns out, the Babel fish from The Hitchhiker’s Guide to the Galaxy could very well be a real thing!
I posted earlier about why it was to Meta's benefit to open-source their large language models, and about the concept of "commoditizing the complement." But I missed another critical part of that strategy: the benefit of creating open ecosystems. Here is a masterclass on strategy from Zuck. (various links in comments)
🎉 Exciting news! 🎉 Meta has just released Llama 3, the next generation of their state-of-the-art open source large language model. But that's not all - Meta is also introducing new trust and safety tools with Llama Guard 2, Code Shield, and CyberSec Eval 2. It's great to see Meta developing Llama 3 in a responsible way. Congratulations on this impressive achievement! #Meta #Llama3 #TrustAndSafety #OpenSource #LanguageModel
Meta Llama 3 is here: https://lnkd.in/g9VDE2gu. It was eye-opening digging into the amount of work that goes into the safety aspects: https://lnkd.in/gm-XxA3X
For those not aware, Meta just dropped its latest foundation model, Llama 3, in both 8B and 70B sizes. It is not a Mixture of Experts, which blows me away considering it is a 70B model, and based on currently available information, it is outperforming Mixtral 8x22B. This gives us an open-source model with phenomenal out-of-the-box performance. With one glaring shortcoming... 8k context. 8k in today's world feels... obsolete. But I can see some decent uses currently. And Meta says they will be releasing longer-context versions as well as a whopping 400B multimodal model in the near future. 400B... Not even an M2 Ultra Mac Studio with 192 GB of RAM will be able to run that in 4-bit quantization. Sounds like it's time to get out that 1-bit quantization and get cracking so that we can run it on commodity hardware! Anywho, new foundation models only help drive the community forward, and I expect there will be some 32k-context fine-tunes coming out in the next few weeks, at which point this model may be a decent coding model.
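The back-of-the-envelope math behind that claim, counting only the model weights (activations and the KV cache add more on top, which only makes things worse):

```python
# Rough memory needed just to hold model weights at a given bit width.
def weight_memory_gb(params: float, bits_per_param: float) -> float:
    return params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB (decimal)

PARAMS_400B = 400e9
print(weight_memory_gb(PARAMS_400B, 4))  # 200.0 GB: over a 192 GB Mac Studio
print(weight_memory_gb(PARAMS_400B, 1))  # 50.0 GB: 1-bit would fit comfortably
```

So 4-bit weights alone already exceed 192 GB of unified memory, while 1-bit quantization brings the weights down to commodity territory.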
Kate S., Head of Design Operations, Meta, shares her insights on the evolution of the design operations discipline at Meta, her enthusiasm for graphic novels and her perspective on AI. Check out the full interview below, and be sure to visit the blog for more profiles of design leaders at Meta. ➡️ Read the full interview: https://bit.ly/3UUYxxm #LifeAtMeta