mastodon.online is one of the many independent Mastodon servers you can use to participate in the fediverse.
A newer server operated by the Mastodon gGmbH non-profit

Server stats: 11K active users

#pytorch

2 posts · 2 participants · 0 posts today
HGPU group<p>PyGraph: Robust Compiler Support for CUDA Graphs in PyTorch</p><p><a href="https://mast.hpc.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> <a href="https://mast.hpc.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a></p><p><a href="https://hgpu.org/?p=29838" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">hgpu.org/?p=29838</span><span class="invisible"></span></a></p>
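The PyGraph paper above is about compiler-level CUDA Graphs support in PyTorch; as a rough sketch of the primitive it builds on (not PyGraph itself), stock PyTorch already exposes manual graph capture. This assumes a CUDA device and static shapes/allocations:

```python
import torch

def capture_step(model, static_input):
    """Capture one inference step into a CUDA graph for cheap replay."""
    # Warm-up on a side stream is required before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            model(static_input)
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_output = model(static_input)  # buffers are frozen here
    return g, static_output

if torch.cuda.is_available():
    model = torch.nn.Linear(128, 128).cuda().eval()
    static_in = torch.randn(32, 128, device="cuda")
    with torch.no_grad():
        g, static_out = capture_step(model, static_in)
    # Replay: copy fresh data into the captured input buffer, then replay
    # the recorded kernels with no Python/launch overhead.
    static_in.copy_(torch.randn(32, 128, device="cuda"))
    g.replay()
    print(static_out.shape)
else:
    print("no CUDA device; skipping graph capture")
```

Replay re-runs the captured kernel sequence against the same memory, which is why inputs must be copied into the original buffer rather than passed anew.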
sc0v0ne<p>MCP, Agentic Knowledge Graphs &amp; AI Models: Solving Conversational Analytics</p><p><a href="https://www.eventbrite.com/e/mcp-agentic-knowledge-graphs-ai-models-solving-conversational-analytics-tickets-1304648411519?aff=ebemoffollowpublishemail&amp;ref=eemail&amp;utm_campaign=following_published_event&amp;utm_content=follow_notification&amp;utm_medium=email&amp;utm_source=eventbrite" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">eventbrite.com/e/mcp-agentic-k</span><span class="invisible">nowledge-graphs-ai-models-solving-conversational-analytics-tickets-1304648411519?aff=ebemoffollowpublishemail&amp;ref=eemail&amp;utm_campaign=following_published_event&amp;utm_content=follow_notification&amp;utm_medium=email&amp;utm_source=eventbrite</span></a></p><p>In this free webinar led by ex-Snowflake, Cloudera, and Amazon leaders, we'll unveil how cutting-edge LLMs (GPT 4.5, Sonnet 3.7, Deepseek V3/R1, Gemini 2.5, etc.) are revolutionizing data products. 
<br><a href="https://mastodon.social/tags/python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>python</span></a> <a href="https://mastodon.social/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://mastodon.social/tags/deeplearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deeplearning</span></a> <a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/developer" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>developer</span></a> <a href="https://mastodon.social/tags/dev" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dev</span></a> <a href="https://mastodon.social/tags/devsecops" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>devsecops</span></a> <a href="https://mastodon.social/tags/devops" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>devops</span></a> <a href="https://mastodon.social/tags/mlops" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>mlops</span></a> <a href="https://mastodon.social/tags/learn" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>learn</span></a> <a href="https://mastodon.social/tags/learning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>learning</span></a> <a href="https://mastodon.social/tags/study" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>study</span></a> <a href="https://mastodon.social/tags/git" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>git</span></a> <a href="https://mastodon.social/tags/github" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>github</span></a> <a href="https://mastodon.social/tags/codeberg" class="mention hashtag" rel="nofollow 
noopener" target="_blank">#<span>codeberg</span></a> <a href="https://mastodon.social/tags/tensorflow" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensorflow</span></a> <a href="https://mastodon.social/tags/pytorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pytorch</span></a> <a href="https://mastodon.social/tags/jax" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>jax</span></a> <a href="https://mastodon.social/tags/huggingface" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>huggingface</span></a> <a href="https://mastodon.social/tags/linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>linux</span></a> <a href="https://mastodon.social/tags/ubuntu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ubuntu</span></a> <a href="https://mastodon.social/tags/popos" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>popos</span></a> <a href="https://mastodon.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a></p>
fsolt<p><strong>Torch Lens Maker</strong></p><p><a href="https://victorpoughon.github.io/torchlensmaker" rel="nofollow noopener" target="_blank">https://victorpoughon.github.io/torchlensmaker</a> – Python library for designing optical elements using PyTorch-based optimization models</p>
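The library's core idea, differentiable optimization of optical designs, reduces to ordinary autograd-driven parameter fitting. A toy sketch of that idea (the quadratic loss stands in for a real ray-tracing loss; nothing here is torchlensmaker's actual API):

```python
import torch

# Treat a design parameter (here, a single "curvature") as a leaf tensor
# and let autograd drive it toward a target behaviour.
curvature = torch.tensor(0.1, requires_grad=True)
target = torch.tensor(0.5)
opt = torch.optim.SGD([curvature], lr=0.5)

for _ in range(200):
    opt.zero_grad()
    loss = (curvature - target) ** 2   # stand-in for a ray-tracing loss
    loss.backward()
    opt.step()

print(round(curvature.item(), 3))      # converges to 0.5
```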
LavX News<p>Unlocking Deep Learning Performance: Understanding Compute, Memory Bandwidth, and Overhead</p><p>In the quest for optimizing deep learning models, developers often resort to trial-and-error methods that can lead to suboptimal performance. This article explores the critical factors affecting deep ...</p><p><a href="https://news.lavx.hu/article/unlocking-deep-learning-performance-understanding-compute-memory-bandwidth-and-overhead" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="ellipsis">news.lavx.hu/article/unlocking</span><span class="invisible">-deep-learning-performance-understanding-compute-memory-bandwidth-and-overhead</span></a></p><p><a href="https://mastodon.cloud/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://mastodon.cloud/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://mastodon.cloud/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> <a href="https://mastodon.cloud/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.cloud/tags/OperatorFusion" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OperatorFusion</span></a></p>
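A minimal sketch of the article's central point: chains of pointwise ops are bound by memory bandwidth, not compute, because eager mode launches one kernel per op and materializes every intermediate. The `torch.compile` remark assumes PyTorch 2.x:

```python
import torch

def pointwise_chain(x):
    # In eager mode each op below launches its own kernel and writes a
    # full-size intermediate, so runtime is dominated by memory traffic
    # (bandwidth), not arithmetic (compute).
    return ((x * 2) + 1).relu()

# torch.compile (PyTorch 2.x) can fuse such chains into a single kernel,
# eliminating the reads/writes of the intermediates:
#   compiled = torch.compile(pointwise_chain)
x = torch.randn(1024)
eager = pointwise_chain(x)
print(torch.equal(eager, torch.clamp_min(x * 2 + 1, 0)))  # True
```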
Kyle Taylor<p>Making the rounds. Worth a reshare on the fedi</p><p>... This post is a long form essay version of a talk about PyTorch internals given at the PyTorch NYC meetup on May 14, 2019....</p><p>[1] <a href="https://blog.ezyang.com/2019/05/pytorch-internals/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.ezyang.com/2019/05/pytorc</span><span class="invisible">h-internals/</span></a></p><p><a href="https://hostux.social/tags/pytorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pytorch</span></a> <a href="https://hostux.social/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a></p>
st1nger :unverified: 🏴‍☠️ :linux: :freebsd:<p><a href="https://infosec.exchange/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> internals (2019) <a href="https://infosec.exchange/tags/python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>python</span></a> <a href="https://infosec.exchange/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> <a href="https://infosec.exchange/tags/cuda" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cuda</span></a> <a href="https://infosec.exchange/tags/ML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ML</span></a> <a href="https://infosec.exchange/tags/NLP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NLP</span></a> <a href="https://blog.ezyang.com/2019/05/pytorch-internals/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.ezyang.com/2019/05/pytorc</span><span class="invisible">h-internals/</span></a></p>
N-gated Hacker News<p>Ah, nothing screams "exciting" like a deep dive into <a href="https://mastodon.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> internals! 🎉 Let's unravel the mysteries of <a href="https://mastodon.social/tags/tensors" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensors</span></a>, because who doesn't love a bedtime story about C codebases? 💤 Spoiler: it's as thrilling as watching paint dry, but with extra parentheses. 🤓<br><a href="https://blog.ezyang.com/2019/05/pytorch-internals/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.ezyang.com/2019/05/pytorc</span><span class="invisible">h-internals/</span></a> <a href="https://mastodon.social/tags/CCodebase" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CCodebase</span></a> <a href="https://mastodon.social/tags/DeepDive" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepDive</span></a> <a href="https://mastodon.social/tags/TechHumor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechHumor</span></a> <a href="https://mastodon.social/tags/ProgrammingInsights" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ProgrammingInsights</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ngated</span></a></p>
Hacker News<p>PyTorch Internals: Ezyang's Blog</p><p><a href="https://blog.ezyang.com/2019/05/pytorch-internals/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.ezyang.com/2019/05/pytorc</span><span class="invisible">h-internals/</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.social/tags/Internals" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Internals</span></a> <a href="https://mastodon.social/tags/Ezyang" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Ezyang</span></a> <a href="https://mastodon.social/tags/Blog" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Blog</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a></p>
LavX News<p>Unlocking AI's Potential with Scallop: The Neurosymbolic Programming Revolution</p><p>Scallop, a groundbreaking declarative language, merges symbolic reasoning with AI applications, offering developers a powerful tool to enhance their machine learning models. By integrating seamlessly ...</p><p><a href="https://news.lavx.hu/article/unlocking-ai-s-potential-with-scallop-the-neurosymbolic-programming-revolution" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="ellipsis">news.lavx.hu/article/unlocking</span><span class="invisible">-ai-s-potential-with-scallop-the-neurosymbolic-programming-revolution</span></a></p><p><a href="https://mastodon.cloud/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://mastodon.cloud/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://mastodon.cloud/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.cloud/tags/NeurosymbolicAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeurosymbolicAI</span></a> <a href="https://mastodon.cloud/tags/Scallop" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Scallop</span></a></p>
nearshorecyber<p>AI/ML Engineers - Check out this job in Mexico City!</p><p>👉 Sr. ML Engineer (Prioritization Engine) - Hybrid with salary of CDMX $140,000 to 150,000 MEX pesos per month<br> <a href="https://www.careers-page.com/nearshore-cyber/job/QWWY6846" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">careers-page.com/nearshore-cyb</span><span class="invisible">er/job/QWWY6846</span></a> </p><p><a href="https://mastodon.social/tags/Sagemaker" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sagemaker</span></a> <a href="https://mastodon.social/tags/ClearML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ClearML</span></a> <a href="https://mastodon.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> <a href="https://mastodon.social/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> <a href="https://mastodon.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a></p>
N-gated Hacker News<p>🤔 Ah, the riveting world of "differentiable geometric optics in PyTorch"—because nothing screams excitement like a virtual optometrist in your <a href="https://mastodon.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a>. 📈🔍 Here's a wild idea: instead of pondering the geometric optics through tiny lenses, maybe take a look through your real-world windows and donate to the author while you're at it. 🤑👓<br><a href="https://victorpoughon.github.io/torchlensmaker/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">victorpoughon.github.io/torchl</span><span class="invisible">ensmaker/</span></a> <a href="https://mastodon.social/tags/differentiableOptics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>differentiableOptics</span></a> <a href="https://mastodon.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.social/tags/virtualOptometrist" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>virtualOptometrist</span></a> <a href="https://mastodon.social/tags/geometricOptics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>geometricOptics</span></a> <a href="https://mastodon.social/tags/donations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>donations</span></a> <a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/ngated" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ngated</span></a></p>
Hacker News<p>Torch Lens Maker – Differentiable Geometric Optics in PyTorch</p><p><a href="https://victorpoughon.github.io/torchlensmaker/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">victorpoughon.github.io/torchl</span><span class="invisible">ensmaker/</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/TorchLensMaker" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TorchLensMaker</span></a> <a href="https://mastodon.social/tags/DifferentiableOptics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DifferentiableOptics</span></a> <a href="https://mastodon.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.social/tags/ComputerVision" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ComputerVision</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a></p>
Titus von der Malsburg 📖👀💭<p>I'd like to buy <a href="https://scholar.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> cloud compute, with SSH access for <a href="https://scholar.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> inference (<a href="https://scholar.social/tags/pytorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pytorch</span></a>). 32GB GPU memory is enough because I'm working with smaller models. Any recommendations?</p>
Habr<p>Writing Your Own Transformer</p><p>As an exercise, I decided to dig into the details and try to write a Transformer in PyTorch myself. I wanted to share the result here. I hope that, as it did for me, this helps you make sense of the architecture and answer some open questions.</p><p><a href="https://habr.com/ru/articles/891972/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">habr.com/ru/articles/891972/</span><span class="invisible"></span></a></p><p><a href="https://zhub.link/tags/transformer" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>transformer</span></a> <a href="https://zhub.link/tags/attention" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>attention</span></a> <a href="https://zhub.link/tags/pytorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pytorch</span></a></p>
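For readers who skip the (Russian-language) article, a self-contained sketch of the kind of Transformer block it walks through. This is a generic single-head illustration, not the article's code:

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        return torch.softmax(scores, dim=-1) @ v

class TransformerBlock(nn.Module):
    """Attention + feed-forward, each wrapped in residual + LayerNorm."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.attn = SelfAttention(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        x = self.ln1(x + self.attn(x))        # post-norm variant
        return self.ln2(x + self.ff(x))

x = torch.randn(2, 16, 64)                    # (batch, seq, d_model)
y = TransformerBlock(64, 256)(x)
print(y.shape)                                # torch.Size([2, 16, 64])
```

A full Transformer stacks several such blocks, adds multi-head attention, positional encodings, and causal masking; this block is the repeating unit.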
Habr<p>Ten Lessons from the Development of AI Hardware Accelerators: How the TPU's Evolution Led to TPUv4i</p><p>In recent years it has become clear that classical central processors (CPUs) and graphics cards (GPUs) no longer always keep pace with the continuous growth and complexity of neural networks. Instead of endlessly scaling up "general-purpose" hardware, companies began designing and deploying Domain-Specific Architectures (DSAs) in their data centers: hardware accelerators tailored to specific workloads. Google's TPU (Tensor Processing Unit) was one of the first major solutions of this kind. Starting in 2015 (the TPUv1 generation), Google has shipped several TPU generations for internal use: TPUv1 and TPUv2/v3, and in 2020 a new design, TPUv4i. While the first TPU versions were aimed exclusively at accelerating inference (running already-trained models), TPUv2 and TPUv3 also took on training large neural networks. It later turned out that, at Google's data-center scale, it is more rational to separate training hardware from inference hardware. TPUv4i is the result of the many lessons and limitations that surfaced in the earlier chips.
In this article we break down the "ten lessons" that shaped Google's approach to TPUv4i, what the architecture is, and which data-center problems it solves.</p><p><a href="https://habr.com/ru/articles/892102/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">habr.com/ru/articles/892102/</span><span class="invisible"></span></a></p><p><a href="https://zhub.link/tags/ml" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ml</span></a> <a href="https://zhub.link/tags/pytorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pytorch</span></a> <a href="https://zhub.link/tags/proceesors" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>proceesors</span></a> <a href="https://zhub.link/tags/deep_learning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deep_learning</span></a> <a href="https://zhub.link/tags/inference" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>inference</span></a></p>
LavX News<p>TorchServe: The Future of PyTorch Model Deployment Faces Limited Maintenance</p><p>TorchServe, a pivotal tool for serving PyTorch models in production, has announced that it is no longer actively maintained. This development raises concerns about the future of model serving in AI ap...</p><p><a href="https://news.lavx.hu/article/torchserve-the-future-of-pytorch-model-deployment-faces-limited-maintenance" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="ellipsis">news.lavx.hu/article/torchserv</span><span class="invisible">e-the-future-of-pytorch-model-deployment-faces-limited-maintenance</span></a></p><p><a href="https://mastodon.cloud/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://mastodon.cloud/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://mastodon.cloud/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> <a href="https://mastodon.cloud/tags/TorchServe" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TorchServe</span></a> <a href="https://mastodon.cloud/tags/ModelServing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ModelServing</span></a></p>
Towards Data Science<p>Is your metric collection slowing down your training? Chaim Rand explores how inefficient metric computation can impact performance and provides optimization techniques using TorchMetrics and <a href="https://hachyderm.io/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> Profiler. </p><p><a href="https://towardsdatascience.com/efficient-metric-collection-in-pytorch-avoiding-the-performance-pitfalls-of-torchmetrics/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">towardsdatascience.com/efficie</span><span class="invisible">nt-metric-collection-in-pytorch-avoiding-the-performance-pitfalls-of-torchmetrics/</span></a></p>
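The gist of the linked article, sketched without the TorchMetrics dependency: reading a metric with `.item()` every step forces a host-device synchronization, so accumulate on-device and sync once. This `RunningMean` is an illustrative stand-in, not the TorchMetrics API:

```python
import torch

# Anti-pattern: calling .item() (or .compute()) every training step forces
# a host-device sync per iteration, stalling the GPU pipeline.
# Better: keep running sums as tensors and sync once when reporting.
class RunningMean:
    def __init__(self):
        self.total = torch.zeros(())
        self.count = 0

    def update(self, batch_losses: torch.Tensor):
        # Stays on whatever device the losses live on; no sync here.
        self.total = self.total.to(batch_losses.device) + batch_losses.sum()
        self.count += batch_losses.numel()

    def compute(self) -> float:
        return (self.total / self.count).item()   # single sync, at the end

m = RunningMean()
for _ in range(5):
    m.update(torch.full((4,), 2.0))
print(m.compute())   # 2.0
```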
Dantali0n :arch: :i3:<p>I wish I had a GPU that <a href="https://fosstodon.org/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> would actually work on.</p><p>But after an extensive battle with <a href="https://fosstodon.org/tags/ROCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm</span></a> and HSA_OVERRIDE_GFX_VERSION, I have given up. It seems PyTorch on RDNA1 is out of the question.</p>
Dmitry Tantsur<p>Aha, roc* stuff is coming from <a href="https://rocm.docs.amd.com/en/latest" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="">rocm.docs.amd.com/en/latest</span><span class="invisible"></span></a> which is something AMD-specific. Same for miopen.</p><p>I&#39;m pretty annoyed that they are hard dependencies even when I don&#39;t have the hardware. Like, it&#39;s fine for several megabytes, but we&#39;re talking about ~20 GiB of dead weight (that makes <a href="https://mastodon.online/tags/PyTorch" class="mention hashtag" rel="tag">#<span>PyTorch</span></a> not installable on my humble root partition).</p><p>It does not look like PyTorch actually requires these unconditionally. Another bug to file?</p><p><a href="https://mastodon.online/tags/Fedora" class="mention hashtag" rel="tag">#<span>Fedora</span></a></p>
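For the no-AMD-hardware case described above, one commonly used workaround is installing from the CPU-only wheel index published on download.pytorch.org, which pulls in none of the ROCm or CUDA runtime libraries (whether this addresses the Fedora distro-packaging side of the complaint is a separate question):

```shell
# Install a CPU-only PyTorch build; no ROCm/CUDA libraries are pulled in,
# keeping the install far smaller than the default accelerator wheels.
pip install torch --index-url https://download.pytorch.org/whl/cpu
```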