AI safety and research start-up Anthropic has raised $704m in its Series A and B rounds.

The funding will enable the start-up to build large-scale experimental infrastructure for exploring and boosting the safety properties of computationally intensive AI models.

Anthropic, a California-based artificial intelligence (AI) safety and research company, has secured $580m in a Series B round. (Credit: Gerd Altmann from Pixabay)

The funding round was led by FTX CEO Sam Bankman-Fried. It also saw the participation of Caroline Ellison, Nishad Singh, Jaan Tallinn, Jim McClave, and the Center for Emerging Risk Research (CERR).

Established in early 2021, Anthropic is said to have carried out research into making AI systems that are more robust, steerable, and interpretable.

When it comes to interpretability, the company is said to have made progress in mathematically reverse engineering the behaviour of small language models. It has also started to understand the source of pattern-matching behaviour in large language models.

Anthropic claims to have developed baseline techniques for steerability and robustness to make large language models more “helpful and harmless”. This has been followed up with reinforcement learning to enhance these properties further. In addition, the company has released a dataset to help other research labs train models that are more in line with human preferences.