mastodon.online is one of the many independent Mastodon servers you can use to participate in the fediverse.
A newer server operated by the Mastodon gGmbH non-profit

#claude

72 posts · 65 participants · 1 post today
Chuck Darwin<p>US politicians are unable to stand up to Donald Trump, <br>French Senator <a href="https://c.im/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://c.im/tags/Malhuret" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Malhuret</span></a> said, <br>after going viral for a speech in which he described <a href="https://c.im/tags/Trump" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Trump</span></a> as presiding over "Nero's court" <br>and billionaire top adviser Elon <a href="https://c.im/tags/Musk" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Musk</span></a> as a "buffoon on ketamine". </p><p>Republicans are afraid of reprisals while Democrats are still reeling from their presidential defeat, Malhuret said</p><p>By calling Donald Trump an “incendiary emperor” <br>and Elon Musk a “buffoon on ketamine,” <br>the 75-year-old senator of French center-right Horizons party has been catapulted overnight into the spotlight <br>thanks to an exceptionally viral video.<br><a href="https://www.france24.com/en/live-news/20250311-us-unable-to-stand-up-to-trump-says-french-senator-after-viral-nero-speech" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">france24.com/en/live-news/2025</span><span class="invisible">0311-us-unable-to-stand-up-to-trump-says-french-senator-after-viral-nero-speech</span></a></p>
Linotype Pilgrim<p>'ai' does not work, and cannot ever do what's claimed for it: <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533158#" rel="nofollow noopener" target="_blank">dl.acm.org/doi/fullHtml...</a> <a class="hashtag" href="https://bsky.app/search?q=%23aifraud" rel="nofollow noopener" target="_blank">#aifraud</a> <a class="hashtag" href="https://bsky.app/search?q=%23openai" rel="nofollow noopener" target="_blank">#openai</a> <a class="hashtag" href="https://bsky.app/search?q=%23claude" rel="nofollow noopener" target="_blank">#claude</a> <a class="hashtag" href="https://bsky.app/search?q=%23anthropic" rel="nofollow noopener" target="_blank">#anthropic</a> <a class="hashtag" href="https://bsky.app/search?q=%23chatgpt" rel="nofollow noopener" target="_blank">#chatgpt</a><br><br><a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533158#" rel="nofollow noopener" target="_blank">The Fallacy of AI Functionalit...</a></p>
André Polykanine<p>AI disappointment, very unusual for me.</p>
Dining & Cooking<p>Grand County welcomes back Chef Jean-Claude Cavalera <a href="https://www.diningandcooking.com/1987110/grand-county-welcomes-back-chef-jean-claude-cavalera/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">diningandcooking.com/1987110/g</span><span class="invisible">rand-county-welcomes-back-chef-jean-claude-cavalera/</span></a> <a href="https://vive.im/tags/appetizers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>appetizers</span></a> <a href="https://vive.im/tags/CAVALERA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CAVALERA</span></a> <a href="https://vive.im/tags/chef" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chef</span></a> <a href="https://vive.im/tags/CLAUDE" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CLAUDE</span></a> <a href="https://vive.im/tags/county" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>county</span></a> <a href="https://vive.im/tags/cuisine" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cuisine</span></a> <a href="https://vive.im/tags/francais" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>francais</span></a> <a href="https://vive.im/tags/france" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>france</span></a> <a href="https://vive.im/tags/French" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>French</span></a> <a href="https://vive.im/tags/FrenchAppetizers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FrenchAppetizers</span></a> <a href="https://vive.im/tags/Grand" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Grand</span></a> <a href="https://vive.im/tags/grill" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>grill</span></a> <a 
href="https://vive.im/tags/restaurant" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>restaurant</span></a> <a href="https://vive.im/tags/stillwater" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>stillwater</span></a></p>
Wulfy<p>I feel a little bit guilty for making <a href="https://infosec.exchange/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> Sonnet (3.7) model "best for coding"</p><p>After some of the shit code exercises I put its precursor, Opus (3.5), through...</p><p>...no raindrop feels responsible for the flood. Right?</p><p>But in this case...<a href="https://infosec.exchange/tags/meaculpa" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>meaculpa</span></a></p>
Wulfy<p><span class="h-card" translate="no"><a href="https://threads.net/@firerock31/" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>firerock31</span></a></span> </p><p>I'm using <a href="https://infosec.exchange/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> for coding.<br>I would not dare use <a href="https://infosec.exchange/tags/xAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>xAI</span></a> lest it disappears all the lowercase letters and the Arabic numerals.</p><p>Do you have any idea how awkward Roman-numeral-only maths is?</p>
Victoria Stuart 🇨🇦 🏳️‍⚧️<p>On the Biology of a Large Language Model [Claude LLM]<br><a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">transformer-circuits.pub/2025/</span><span class="invisible">attribution-graphs/biology.html</span></a><br>Anthropic: On the Biology of a Large Language Model : MachineLearning<br><a href="https://old.reddit.com/r/MachineLearning/comments/1jmhoq6/r_anthropic_on_the_biology_of_a_large_language" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">old.reddit.com/r/MachineLearni</span><span class="invisible">ng/comments/1jmhoq6/r_anthropic_on_the_biology_of_a_large_language</span></a></p><p><a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://mastodon.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://mastodon.social/tags/ChainOfThought" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChainOfThought</span></a> <a href="https://mastodon.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a></p>
Annie<p>"To date, the pages themselves have hardly been disseminated on social networks. However, they end up in the index of search engines en masse, poisoning the data records accessed by language models such as <a href="https://ruhr.social/tags/ChatGpt" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGpt</span></a>, <a href="https://ruhr.social/tags/Gemini" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gemini</span></a> or <a href="https://ruhr.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> with their lies."</p><p>The infection of western AI chat bots by a Russian propaganda network/Newsguard</p><p><a href="https://ruhr.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://ruhr.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://ruhr.social/tags/KI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KI</span></a> <a href="https://ruhr.social/tags/RussianPropaganda" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RussianPropaganda</span></a> <a href="https://ruhr.social/tags/Russia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Russia</span></a> </p><p>March2025PravdaAIMisinformationMonitor.pdf</p>
Victoria Stuart 🇨🇦 🏳️‍⚧️<p>/1 That post includes the following video - which, in simple language and examples, gives a basic overview of a current reasoning LLM (here: Claude) and "prompt engineering."</p><p>Tracing the thoughts of a large language model<br><a href="https://www.youtube.com/watch?v=Bj9BD2D3DzA" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=Bj9BD2D3DzA</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://mastodon.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://mastodon.social/tags/ChainOfThought" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChainOfThought</span></a> <a href="https://mastodon.social/tags/reasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reasoning</span></a></p>
Miguel Afonso Caetano<p>"Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.</p><p>It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.</p><p>Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. 
Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response."</p><p><a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">anthropic.com/research/tracing</span><span class="invisible">-thoughts-language-model</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://tldr.nettime.org/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Hallucinations</span></a></p>
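The refusal-gating mechanism quoted above can be illustrated with a toy sketch. Everything here is an invented simplification for intuition: the entities, activation scores, and threshold are hypothetical stand-ins, not Anthropic's actual features or circuits.

```python
# Toy model of the behavior described in Anthropic's paper:
# refusal is the default, and a sufficiently strong "known entity"
# activation inhibits it. Scores and threshold are illustrative only.

KNOWN_ENTITY_SCORES = {
    "Michael Jordan": 0.95,   # well known: strong "known entity" activation
    "Michael Batkin": 0.05,   # unfamiliar: weak activation
}

INHIBITION_THRESHOLD = 0.5   # hypothetical cutoff for inhibiting refusal

def answer_or_refuse(entity: str) -> str:
    """Refuse by default; answer only if the 'known entity' signal is strong."""
    activation = KNOWN_ENTITY_SCORES.get(entity, 0.0)
    if activation < INHIBITION_THRESHOLD:
        # Default "don't know" circuit stays on.
        return f"I don't have enough information about {entity}."
    # Strong activation suppresses the default circuit; the model answers.
    return f"[answer about {entity}]"
```

A hallucination, in this picture, corresponds to a name getting a high activation score despite the model knowing nothing substantive about it: the refusal circuit is suppressed and the model confabulates.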
KINEWS24<p>Claude 3.7 Sonnet: AI with a 500k context window</p><p>Revolutionary context size<br>Improved processing<br>New possibilities</p><p><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/ki" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ki</span></a> <a href="https://mastodon.social/tags/artificialintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>artificialintelligence</span></a> <a href="https://mastodon.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://mastodon.social/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a></p><p>Read and follow now!</p><p><a href="https://kinews24.de/claude-3-7-sonnet-bald-mit-500k-contextfenster/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">kinews24.de/claude-3-7-sonnet-</span><span class="invisible">bald-mit-500k-contextfenster/</span></a></p>
:rss: Qiita - Popular Articles<p>[Translation] Model Context Protocol (MCP) and Amazon Bedrock<br><a href="https://qiita.com/kazuneet/items/479c7d31b31e411acbb4?utm_campaign=popular_items&amp;utm_medium=feed&amp;utm_source=popular_items" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">qiita.com/kazuneet/items/479c7</span><span class="invisible">d31b31e411acbb4?utm_campaign=popular_items&amp;utm_medium=feed&amp;utm_source=popular_items</span></a></p><p><a href="https://rss-mstdn.studiofreesia.com/tags/qiita" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>qiita</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/MCP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MCP</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/bedrock" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bedrock</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/MCP%E3%82%B5%E3%83%BC%E3%83%90%E3%83%BC" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MCPサーバー</span></a></p>
Wulfy<p>Had a super-brief session with me mate <a href="https://infosec.exchange/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> on the latest research on superradiance and Kurian calculations on new upper limits on how much computing all life on Earth has done throughout history...</p><p>...and suddenly I used up all the compute for 4 hours...</p><p>Seems like there are certain topics that burn far more compute than others...</p><p>...NO DON'T CALCULATE THE UPPER COMPUTATIONAL LIMIT YOU SILLY AI 😁</p>
PUPUWEB Blog<p>Anthropic researchers reveal surprising insights from observing Claude's thought process: planning ahead, confusion between safety &amp; helpfulness goals, lying, and more. 🤖 <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://mastodon.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/TechNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechNews</span></a> <a href="https://mastodon.social/tags/AIResearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIResearch</span></a></p>
Ace Fujiwara<p>Absolutely fascinating, the logical clarity with which Claude AI can code, given an input prompt that is just as directed and structured.</p><p><a href="https://github.com/anthropics/claude-code" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/anthropics/claude-c</span><span class="invisible">ode</span></a></p><p><a href="https://mstdn.social/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> <a href="https://mstdn.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mstdn.social/tags/generativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>generativeAI</span></a> <a href="https://mstdn.social/tags/innovation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>innovation</span></a> <a href="https://mstdn.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://mstdn.social/tags/finance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>finance</span></a></p>
Marcus Adams<p>Just wrote a post on <a href="https://mastodon.social/tags/Substack" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Substack</span></a> about the <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> crap that has been going on this past week.</p><p>Link: <a href="https://gerowen.substack.com/p/the-ai-data-scraping-is-getting-out" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">gerowen.substack.com/p/the-ai-</span><span class="invisible">data-scraping-is-getting-out</span></a></p><p><a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> <a href="https://mastodon.social/tags/Meta" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Meta</span></a> <a href="https://mastodon.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://mastodon.social/tags/Networking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Networking</span></a> <a href="https://mastodon.social/tags/Bytedance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Bytedance</span></a></p>
Wulfy<p><span class="h-card" translate="no"><a href="https://mastodon.social/@talin" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>talin</span></a></span> <span class="h-card" translate="no"><a href="https://wandering.shop/@cstross" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>cstross</span></a></span> </p><p>First, Kudos to you exploring.<br>Too many smart folks poke the <a href="https://infosec.exchange/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> with a stick and then join the "AI Slop" <a href="https://infosec.exchange/tags/luddite" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>luddite</span></a> chorus.</p><p>There is an LLM setting called "Temperature" on public models, it's set low. Result is pedestrian writing.<br>Some API calls allow you to mess with it. But you'll need a huggingface personal <br><a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> model to play with it.</p><p>Think of it as Ernest Hemingway's bottle of whisky. 
The higher the temperature, the higher the creativity.</p><p>I'll refer you to me mate <a href="https://infosec.exchange/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> for more;</p><p>Temperature Parameter in LLMs</p><p>## Basic Concept<br>- Controls the probability distribution of token selection<br>- Low temperature (close to 0): More deterministic, focused responses<br>- High temperature (closer to 1 or higher): More random, creative responses</p><p>## Practical Effects<br>- Low temperature (0.1-0.3):<br> - More conservative, precise answers<br> - Tends to choose most likely tokens<br> - Reduces hallucination<br> - More factual and consistent</p><p>- High temperature (0.7-1.0):<br> - More imaginative and diverse outputs<br> - Increases likelihood of unexpected/creative responses<br> - More prone to potential inaccuracies<br> - Can generate more varied text</p>
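The temperature behavior summarized in the post above can be sketched in a few lines: temperature divides the logits before the softmax, so low values sharpen the distribution toward the most likely token and high values flatten it. This is a minimal illustration, not any particular provider's API; the token names and logit values are made up.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one next token; lower temperature concentrates probability
    on the highest-logit token, higher temperature spreads it out."""
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw a token from the resulting distribution.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding
```

With toy logits like `{"the": 2.0, "a": 1.0, "cat": 0.0}`, temperature 0.2 makes "the" come out almost every time, while temperature 2.0 lets "a" and "cat" appear regularly.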
BGDoncaster<p>Whoa! LOTS to unpack here. Weekend Reading! </p><p>Anthropic reveals research how AI systems process information and make decisions. AI models can perform a chain of reasoning, can plan ahead, and sometimes work backward from a desired outcome. The research also provides insight into why language models hallucinate. </p><p>Interpretation techniques called “circuit tracing” and “attribution graphs” enable researchers to map out the specific pathways of neuron-like features that activate when models perform tasks. See the links below for details. </p><p>Summary Article: <a href="https://venturebeat.com/ai/anthropic-scientists-expose-how-ai-actually-thinks-and-discover-it-secretly-plans-ahead-and-sometimes-lies/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">venturebeat.com/ai/anthropic-s</span><span class="invisible">cientists-expose-how-ai-actually-thinks-and-discover-it-secretly-plans-ahead-and-sometimes-lies/</span></a> </p><p>Circuit Tracing: <a href="https://transformer-circuits.pub/2025/attribution-graphs/methods.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">transformer-circuits.pub/2025/</span><span class="invisible">attribution-graphs/methods.html</span></a> </p><p>Research Overview: <a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">transformer-circuits.pub/2025/</span><span class="invisible">attribution-graphs/biology.html</span></a> <a href="https://techhub.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://techhub.social/tags/Anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Anthropic</span></a> <a href="https://techhub.social/tags/LLMs" class="mention 
hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://techhub.social/tags/Claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Claude</span></a> <a href="https://techhub.social/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> <a href="https://techhub.social/tags/CircuitTracing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CircuitTracing</span></a> <a href="https://techhub.social/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a></p>
Ryan Daws 🤓<p>Anthropic provides insights into the ‘AI biology’ of Claude <a href="https://www.artificialintelligence-news.com/news/anthropic-provides-insights-ai-biology-of-claude/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">artificialintelligence-news.co</span><span class="invisible">m/news/anthropic-provides-insights-ai-biology-of-claude/</span></a> <a href="https://techhub.social/tags/anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>anthropic</span></a> <a href="https://techhub.social/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> <a href="https://techhub.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://techhub.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://techhub.social/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://techhub.social/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://techhub.social/tags/technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technology</span></a></p>