mastodon.online is one of the many independent Mastodon servers you can use to participate in the fediverse.
A newer server operated by the Mastodon gGmbH non-profit

Server stats: 11K active users
#embedding

Hacker News<p>SOTA Code Retrieval with Efficient Code Embedding Models — <a href="https://www.qodo.ai/blog/qodo-embed-1-code-embedding-code-retreival/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">qodo.ai/blog/qodo-embed-1-code</span><span class="invisible">-embedding-code-retreival/</span></a><br><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/SOTA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SOTA</span></a> <a href="https://mastodon.social/tags/Code" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Code</span></a> <a href="https://mastodon.social/tags/Retrieval" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Retrieval</span></a> <a href="https://mastodon.social/tags/Code" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Code</span></a> <a href="https://mastodon.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/Technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Technology</span></a> <a href="https://mastodon.social/tags/Machine" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Machine</span></a> <a href="https://mastodon.social/tags/Learning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Learning</span></a></p>
Alejandro Duarte<p><a href="https://mastodon.online/tags/AI" class="mention hashtag" rel="tag">#<span>AI</span></a> and <a href="https://mastodon.online/tags/RAG" class="mention hashtag" rel="tag">#<span>RAG</span></a> - Learning the basics: What exactly is an <a href="https://mastodon.online/tags/embedding" class="mention hashtag" rel="tag">#<span>embedding</span></a> and how to use them in <a href="https://mastodon.online/tags/MariaDB" class="mention hashtag" rel="tag">#<span>MariaDB</span></a>?<br /><a href="https://www.youtube.com/watch?v=XkB2DLK60JU" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=XkB2DLK60JU</span><span class="invisible"></span></a></p>
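The question in the video title ("what exactly is an embedding?") has a one-screen answer. The sketch below is my own toy illustration, not material from the video and not MariaDB-specific: a text becomes a fixed-length numeric vector, and "similar meaning" becomes "nearby vectors". A real system would use a trained model instead of this bag-of-words counter, and a database would store the vectors in a column for nearest-neighbour search.

```python
# Toy illustration: an "embedding" maps text to a fixed-length vector,
# and similar texts land near each other. A tiny bag-of-words over a
# fixed vocabulary stands in for a trained model here.
import math

VOCAB = ["cats", "dogs", "mice", "chase", "stock", "prices", "fell", "and"]

def embed(text: str) -> list[float]:
    """Count vocabulary words, then scale the vector to unit length."""
    vec = [0.0] * len(VOCAB)
    for word in text.lower().split():
        if word in VOCAB:
            vec[VOCAB.index(word)] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-length, so the dot product IS cosine similarity.
    return sum(x * y for x, y in zip(a, b))

docs = ["cats chase mice", "dogs chase cats", "stock prices fell"]
query = embed("cats and dogs")
best = max(docs, key=lambda d: cosine(query, embed(d)))
# "dogs chase cats" wins: it shares the most words with the query.
```

Swap `embed` for a real model and `docs` for a table of stored vectors and you have the skeleton of embedding-based retrieval.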
linkdrop<p>GitHub - lancedb/lancedb: Developer-friendly, serverless vector database for AI applications. Easily add long-term memory to your LLM apps! <a href="https://github.com/lancedb/lancedb" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/lancedb/lancedb</span><span class="invisible"></span></a> <a href="https://troet.cafe/tags/persistence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>persistence</span></a> <a href="https://troet.cafe/tags/OpenSource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenSource</span></a> <a href="https://troet.cafe/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://troet.cafe/tags/database" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>database</span></a> <a href="https://troet.cafe/tags/GitHub" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GitHub</span></a> <a href="https://troet.cafe/tags/search" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>search</span></a> <a href="https://troet.cafe/tags/vector" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>vector</span></a> <a href="https://troet.cafe/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a></p>
:rss: Qiita - Popular Articles<p>💟🎉 My first AI project! An inquiry-handling chatbot I had a blast building 🎉💟<br><a href="https://qiita.com/SatoRyota_zvc/items/c5d647f5174ca8136bcb?utm_campaign=popular_items&amp;utm_medium=feed&amp;utm_source=popular_items" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">qiita.com/SatoRyota_zvc/items/</span><span class="invisible">c5d647f5174ca8136bcb?utm_campaign=popular_items&amp;utm_medium=feed&amp;utm_source=popular_items</span></a></p><p><a href="https://rss-mstdn.studiofreesia.com/tags/qiita" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>qiita</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/chatbot" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbot</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a></p>
LinoTadros<p>Video: Using an external Azure AI Search Vector store in Azure AI Foundry Prompt Flow.<br><a href="https://youtu.be/v3hcfY1oe_k?si=GlAApFy1rD7sz3nj" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">youtu.be/v3hcfY1oe_k?si=GlAApF</span><span class="invisible">y1rD7sz3nj</span></a><br>@thetrainingboss <a href="https://mastodon.social/tags/azureaifoundry" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>azureaifoundry</span></a> <a href="https://mastodon.social/tags/azureaisearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>azureaisearch</span></a> <a href="https://mastodon.social/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://mastodon.social/tags/vectorstore" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>vectorstore</span></a> <a href="https://mastodon.social/tags/promptflow" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>promptflow</span></a> <a href="https://mastodon.social/tags/lookup" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lookup</span></a></p>
Piotr Migdał<p>I’m excited to share my newest blog post, "Don't use cosine similarity carelessly"</p><p><a href="https://p.migdal.pl/blog/2025/01/dont-use-cosine-similarity" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">p.migdal.pl/blog/2025/01/dont-</span><span class="invisible">use-cosine-similarity</span></a></p><p>We often rely on cosine similarity to compare embeddings—it's like “duct tape” for vector comparisons. But just like duct tape, it can quietly mask deeper problems. Sometimes, embeddings pick up a “wrong kind” of similarity, matching questions to questions instead of questions to answers, or getting thrown off by formatting quirks and typos rather than the text's real meaning.</p><p>In my post, I discuss what can go wrong with off-the-shelf cosine similarity and share practical alternatives. If you’ve ever wondered why your retrieval system returns oddly matched items or how to refine your embeddings for more meaningful results, this is for you!<br><br>I want to thank Max Salamonowicz and Grzegorz Kossakowski for their feedback after my flash talk at the Warsaw AI Breakfast, Rafał Małanij for inviting me to give a talk at the Python Summit, and everyone who asked curious questions at the conference and on LinkedIn.</p><p><a href="https://mathstodon.xyz/tags/cosineSimilarity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cosineSimilarity</span></a> <a href="https://mathstodon.xyz/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://mathstodon.xyz/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://mathstodon.xyz/tags/similarity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>similarity</span></a></p>
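One failure mode the post warns about fits in a few lines. This sketch is my own, not code from the post: two fabricated embeddings disagree on their "content" dimensions but share one large common component, the kind a template or formatting quirk can inject into every vector. Raw cosine similarity calls them near-identical; centring (subtracting the shared mean) exposes the disagreement.

```python
# Sketch: a large shared component inflates raw cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two "embeddings" that disagree on their content dimensions (the last
# two) but share a huge common bias on the first dimension.
a = [10.0, 1.0, 0.0]
b = [10.0, 0.0, 1.0]
raw = cosine(a, b)  # ~0.99: dominated by the shared component

# Centre: subtract the mean vector so the shared direction cancels.
mean = [(x + y) / 2 for x, y in zip(a, b)]
a_c = [x - m for x, m in zip(a, mean)]
b_c = [x - m for x, m in zip(b, mean)]
centred = cosine(a_c, b_c)  # ~-1.0: the vectors actually disagree
```

Centring against a corpus mean is one of the cheap mitigations; the post discusses others, such as fine-tuning for the task-relevant notion of similarity.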
Ricardo<p>Damn, this is really cool, but I wish it had a big “prerequisites” section in the readme with “NVIDIA” in it <a href="https://mstdn.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mstdn.social/tags/RAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RAG</span></a> <a href="https://mstdn.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> <a href="https://mstdn.social/tags/Documents" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Documents</span></a> <a href="https://mstdn.social/tags/Ollama" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Ollama</span></a> <a href="https://github.com/TilmanGriesel/chipper" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/TilmanGriesel/chipp</span><span class="invisible">er</span></a></p>
Ben Lorica 罗瑞卡<p>🆕 Encoder-only model that's a direct drop-in replacement for existing BERT models<br>- First major upgrade to BERT-style models in six years<br>- Significantly reduced processing costs for large-scale applications<br>- Enables longer document processing without chunking<br>- Better performance in retrieval tasks<br>- Suitable for consumer-grade GPU deployment<br><a href="https://indieweb.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://indieweb.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://indieweb.social/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a><br><a href="https://huggingface.co/blog/modernbert" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">huggingface.co/blog/modernbert</span><span class="invisible"></span></a></p>
LinoTadros<p>Coaching 2 workshops this week on AI Design Wins using <a href="https://mastodon.social/tags/CosmosDB" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CosmosDB</span></a> for <a href="https://mastodon.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> and <a href="https://mastodon.social/tags/Vectorization" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Vectorization</span></a> of content in text and audio format. Lots of fun and will record videos to publish for all on YouTube soon. @soliancenet @thetrainingboss</p>
Some Bits: Nelson's Linkblog<p>SQLite's Use Of Tcl (2017): I had no idea the database was originally written to be used as a Tcl extension. Explains a lot of good things.<br><a href="https://www.tcl.tk/community/tcl2017/assets/talk93/Paper.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">tcl.tk/community/tcl2017/asset</span><span class="invisible">s/talk93/Paper.html</span></a><br> <a href="https://tech.lgbt/tags/via" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>via</span></a>:lobsters <a href="https://tech.lgbt/tags/programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>programming</span></a> <a href="https://tech.lgbt/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://tech.lgbt/tags/sqlite" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sqlite</span></a> <a href="https://tech.lgbt/tags/tcl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tcl</span></a></p>
The New Stack<p>Fine-tuning <a href="https://hachyderm.io/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> models clarifies enterprise semantics, business metrics, and ranking relevance prior to users issuing prompts.</p><p><a href="https://thenewstack.io/the-secret-sauce-for-vector-search-training-embedding-models/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">thenewstack.io/the-secret-sauc</span><span class="invisible">e-for-vector-search-training-embedding-models/</span></a></p>
Ian K Tindale<p><span>The current revelation that LLMs can’t reason is causing a lot of shade &amp; fraud, but it’s not entirely true<br><br>An LLM could reason, if you gave it a corpus of sentences (in whichever languages) which explicitly and unambiguously described a whole big bag of causal relationships and outcomes and things that happen because other things happen, and general structures like that, described clearly and formally and without any possibility of confusion<br><br>The embeddings which result from such a corpus could well work as a reference source of logic or cause or common sense or reason, about lots of things. The next step would be to make these embeddings generalisable, so that the common sense of the way life is can be applied widely (again using vector comparison). So yes, it is possible to apply reason to an LLM; the main thing is that there probably isn’t an emphasis on that kind of descriptive and even prescriptive literature in and among the source learning in the first place –&nbsp;there’ll be a lot, there’ll be some, but I don’t think it was emphasised<br><br>By introducing it at the RAG level, and then letting the embeddings migrate back into future models, I believe it could be possible to emulate a lot of common sense about the world and the way things are, purely through description of such – after all, the embeddings produced from such a block (a very massive block) of description, as vectors, are only numbers, which is what LLMs are really operating on: just vectors, not words, not tokens, just numbers<br><br>Consequently my dreams of applying real-world sensor/actuator ways of learning about the real world and building common sense can probably be supplanted by a rigorous and hefty major project of describing it instead of actually doing it –&nbsp;but the thing to watch would be the description itself: it’d have to be as detailed and accurate and wide-ranging as the experiential model would 
be, and this might be where the difficulty lies: people describing common sense in the world tend to abbreviate, generalise prematurely, miss things out, misunderstand, and above all, assume a lot </span><a href="https://toot.pikopublish.ing/tags/AI" rel="nofollow noopener" target="_blank">#AI</a><span> </span><a href="https://toot.pikopublish.ing/tags/LLM" rel="nofollow noopener" target="_blank">#LLM</a><span> </span><a href="https://toot.pikopublish.ing/tags/reasoning" rel="nofollow noopener" target="_blank">#reasoning</a><span> </span><a href="https://toot.pikopublish.ing/tags/CommonSense" rel="nofollow noopener" target="_blank">#CommonSense</a><span> </span><a href="https://toot.pikopublish.ing/tags/vector" rel="nofollow noopener" target="_blank">#vector</a><span> </span><a href="https://toot.pikopublish.ing/tags/embedding" rel="nofollow noopener" target="_blank">#embedding</a></p>
Miha Kosmac<p>An interesting bioRxiv preprint was shared on the 🐦 site (<a href="https://x.com/strnr/status/1844105666962579813" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">x.com/strnr/status/18441056669</span><span class="invisible">62579813</span></a>). The paper describes a model to represent cells from large scale scRNA seq atlases using LLMs. Apart from the novelty value one of the main draws should be the ability to map any dataset with no additional data labelling, model training or fine-tuning onto the existing universal cell embedding. <a href="https://www.biorxiv.org/content/10.1101/2023.11.28.568918v2" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">biorxiv.org/content/10.1101/20</span><span class="invisible">23.11.28.568918v2</span></a><br><a href="https://github.com/snap-stanford/UCE" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/snap-stanford/UCE</span><span class="invisible"></span></a><br><a href="https://mastodonapp.uk/tags/scRNAseq" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>scRNAseq</span></a> <a href="https://mastodonapp.uk/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://mastodonapp.uk/tags/biology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biology</span></a> <a href="https://mastodonapp.uk/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a></p>
Rost Glukhov<p>How to rerank documents with Embedding models &amp; similarity calculation in RAG:</p><p><a href="https://www.glukhov.org/post/2024/09/reranking-with-embedding-models" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">glukhov.org/post/2024/09/reran</span><span class="invisible">king-with-embedding-models</span></a></p><p><a href="https://techhub.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://techhub.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://techhub.social/tags/Ollama" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Ollama</span></a> <a href="https://techhub.social/tags/RAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RAG</span></a> <a href="https://techhub.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a></p>
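The core mechanism of the linked article can be sketched briefly. This is my own toy, not the article's code: score each candidate document by cosine similarity between its embedding and the query's, then sort descending. The `embed` function here is a deterministic character-frequency stand-in; a real pipeline would call an embedding model (for example, one served by Ollama) instead.

```python
# Sketch of embedding-based reranking: embed query and candidates,
# score by cosine similarity, return best-first.
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: letter frequencies over a-z, unit-normalised.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def rerank(query: str, docs: list[str]) -> list[tuple[str, float]]:
    """Return (doc, similarity) pairs, best match first."""
    q = embed(query)
    scored = [(d, sum(x * y for x, y in zip(q, embed(d)))) for d in docs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    "embedding models for search",
    "banana bread recipe",
    "vector search with embeddings",
]
ranked = rerank("embedding search", candidates)
```

In a RAG setup this step typically runs after a cheap first-stage retriever has produced the candidate list, so only a handful of documents need embedding and scoring per query.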
Habr<p>Distributed llama.cpp Inference over RPC</p><p>Greetings, Habr readers! The idea for this post had been rattling around in my head for a long time: one of my hobbies involves distributed computing, another involves neural networks, and I had long been itching to run LLM inference across several computers, with all of them working on the same model in parallel. After some googling I learned that the LocalAI project has supported this for quite a while. Without much deliberation I rolled it out on several machines, did all the necessary configuration to link the instances into a single system, and was, to put it mildly, disappointed: the solution turned out to be "fatally inadequate". The Docker image was built suboptimally, huge and amd64-only; a web interface you couldn't disable shipped with the project; the choice of models was meagre; some of the available LLMs didn't work in RPC mode; all the embedding models likewise refused to start in that mode; and so on and so forth. After tinkering a little longer I dug into the sources, found a mention of the llama.cpp project, and then the invocation of the rpc-server binary. 
And so I landed on the llama.cpp/examples/rpc page, and it all spun up...</p><p><a href="https://habr.com/ru/articles/843372/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">habr.com/ru/articles/843372/</span><span class="invisible"></span></a></p><p><a href="https://zhub.link/tags/docker" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>docker</span></a> <a href="https://zhub.link/tags/llamacpp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llamacpp</span></a> <a href="https://zhub.link/tags/rpc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>rpc</span></a> <a href="https://zhub.link/tags/dockerhub" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dockerhub</span></a> <a href="https://zhub.link/tags/gguf" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gguf</span></a> <a href="https://zhub.link/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://zhub.link/tags/api" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>api</span></a></p>
michabbb<p>Jina AI just released Jina ColBERT v2, a Multilingual Late Interaction Retriever for <a href="https://social.vivaldi.net/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> and <a href="https://social.vivaldi.net/tags/Reranking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Reranking</span></a>. The new model supports 89 languages with superior retrieval performance, user-controlled output dimensions, and an 8192-token context length. </p><p><a href="https://jina.ai/news/jina-colbert-v2-multilingual-late-interaction-retriever-for-embedding-and-reranking/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">jina.ai/news/jina-colbert-v2-m</span><span class="invisible">ultilingual-late-interaction-retriever-for-embedding-and-reranking/</span></a></p><p><a href="https://social.vivaldi.net/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://social.vivaldi.net/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a></p>
Jonathan Bailey<p>The server test has enabled users who embed content to skirt copyright infringement. However, the 2007 ruling faces another major challenge.</p><p><a href="https://www.plagiarismtoday.com/2024/08/08/the-server-test-suffers-a-major-blow/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">plagiarismtoday.com/2024/08/08</span><span class="invisible">/the-server-test-suffers-a-major-blow/</span></a></p><p><a href="https://mastodon.world/tags/Copyright" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copyright</span></a> <a href="https://mastodon.world/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> <a href="https://mastodon.world/tags/SocialMedia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SocialMedia</span></a> <a href="https://mastodon.world/tags/NinthCircuit" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NinthCircuit</span></a></p>
Inautilo<p><a href="https://mastodon.social/tags/Development" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Development</span></a> <a href="https://mastodon.social/tags/Techniques" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Techniques</span></a><br>External, styleable, and scalable SVGs · SVG embeddings that leave little to be desired <a href="https://ilo.im/15zn1a" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ilo.im/15zn1a</span><span class="invisible"></span></a></p><p>_____<br><a href="https://mastodon.social/tags/VectorGraphic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>VectorGraphic</span></a> <a href="https://mastodon.social/tags/SVG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SVG</span></a> <a href="https://mastodon.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> <a href="https://mastodon.social/tags/WebPage" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebPage</span></a> <a href="https://mastodon.social/tags/WebDev" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebDev</span></a> <a href="https://mastodon.social/tags/Frontend" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Frontend</span></a> <a href="https://mastodon.social/tags/HTML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HTML</span></a> <a href="https://mastodon.social/tags/CSS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CSS</span></a> <a href="https://mastodon.social/tags/CustomProperty" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CustomProperty</span></a></p>
Inautilo<p><a href="https://mastodon.social/tags/Development" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Development</span></a> <a href="https://mastodon.social/tags/Pitfalls" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Pitfalls</span></a><br>YouTube embeds are bananas heavy · Lighter ways to add YouTube videos on your website <a href="https://ilo.im/15zdd6" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ilo.im/15zdd6</span><span class="invisible"></span></a></p><p>_____<br><a href="https://mastodon.social/tags/Video" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Video</span></a> <a href="https://mastodon.social/tags/Youtube" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Youtube</span></a> <a href="https://mastodon.social/tags/Embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embedding</span></a> <a href="https://mastodon.social/tags/WebComponent" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebComponent</span></a> <a href="https://mastodon.social/tags/ProgressiveEnhancement" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ProgressiveEnhancement</span></a> <a href="https://mastodon.social/tags/WebPerf" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebPerf</span></a> <a href="https://mastodon.social/tags/WebDev" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WebDev</span></a> <a href="https://mastodon.social/tags/Frontend" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Frontend</span></a> <a href="https://mastodon.social/tags/HTML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HTML</span></a> <a href="https://mastodon.social/tags/JavaScript" class="mention hashtag" rel="nofollow noopener" 
target="_blank">#<span>JavaScript</span></a></p>
timvw<p>Gave <a href="https://ollama.com/avr/sfr-embedding-mistral" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ollama.com/avr/sfr-embedding-m</span><span class="invisible">istral</span></a> a spin, but it took way too long (3+ hours) to generate 5K embeddings on my M3 Pro (32 GB). <a href="https://fosstodon.org/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://fosstodon.org/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://fosstodon.org/tags/ollama" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ollama</span></a></p>