Juan Fumero<p>Hardware <a href="https://mastodon.online/tags/Acceleration" class="mention hashtag" rel="tag">#<span>Acceleration</span></a> for <a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="tag">#<span>Java</span></a> Ray Tracing: <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a> in Action</p><p><a href="https://www.youtube.com/watch?v=7q9AGvpZ4Hw" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=7q9AGvpZ4Hw</span><span class="invisible"></span></a></p>
Juan Fumero<p>New blogpost: Learn how to build JDK 21 and JDK 25 with the HotSpot Disassembler (HSDIS) plugin enabled for Linux. `hsdis` is a tool to inspect the JVM’s JIT-compiled assembly code, and this post explains how to configure it:</p><p>🔗 <a href="https://jjfumero.github.io/posts/2025/02/14/jdk-hsdis-build" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2025/</span><span class="invisible">02/14/jdk-hsdis-build</span></a> </p><p><a href="https://mastodon.online/tags/java" class="mention hashtag" rel="tag">#<span>java</span></a> <a href="https://mastodon.online/tags/hsdis" class="mention hashtag" rel="tag">#<span>hsdis</span></a> <a href="https://mastodon.online/tags/openjdk" class="mention hashtag" rel="tag">#<span>openjdk</span></a></p>
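The build described in the post can be sketched roughly as follows (a minimal recipe under assumed defaults; `MyApp` is a placeholder class name, and the exact steps and image path are covered in the linked article):

```shell
# Fetch the OpenJDK sources and configure with the hsdis plugin,
# letting the build download a compatible binutils automatically.
git clone https://github.com/openjdk/jdk.git && cd jdk
bash configure --with-hsdis=binutils --with-binutils=download

# Build hsdis and the JDK images.
make build-hsdis
make images

# Run with diagnostic flags to print the JIT-compiled assembly.
build/linux-x86_64-server-release/images/jdk/bin/java \
    -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyApp
```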
Juan Fumero<p>New Article: <a href="https://mastodon.online/tags/Babylon" class="mention hashtag" rel="tag">#<span>Babylon</span></a> OpenJDK - A Guide for Beginners and Comparisons with <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a></p><p>🔗<a href="https://jjfumero.github.io/posts/2025/02/07/babylon-and-tornadovm" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2025/</span><span class="invisible">02/07/babylon-and-tornadovm</span></a></p><p><a href="https://mastodon.online/tags/java" class="mention hashtag" rel="tag">#<span>java</span></a> <a href="https://mastodon.online/tags/ai" class="mention hashtag" rel="tag">#<span>ai</span></a> <a href="https://mastodon.online/tags/gpus" class="mention hashtag" rel="tag">#<span>gpus</span></a> <a href="https://mastodon.online/tags/openjdk" class="mention hashtag" rel="tag">#<span>openjdk</span></a></p>
Juan Fumero<p>How to Fix CUDA GCC Unsupported Versions on Linux</p><p>🔗 <a href="https://jjfumero.github.io/posts/2025/01/16/cuda-gcc-versions" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2025/</span><span class="invisible">01/16/cuda-gcc-versions</span></a></p>
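Two common workarounds for this class of error look like the following (a sketch with hypothetical GCC/file names; the linked post has the full details for specific CUDA releases):

```shell
# nvcc rejects host GCC versions newer than those it was tested with.

# Option 1: install an older GCC and point nvcc at it via -ccbin.
sudo apt install gcc-12 g++-12
nvcc -ccbin /usr/bin/g++-12 my_kernel.cu -o my_kernel

# Option 2: bypass the version check (use with care; untested
# compiler combinations can miscompile device code).
nvcc --allow-unsupported-compiler my_kernel.cu -o my_kernel
```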
Juan Fumero<p>New post: Learn how to set up WSL for GPU compute, including TornadoVM</p><p>🔗<a href="https://jjfumero.github.io/posts/2025/01/14/gpu-wsl" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2025/</span><span class="invisible">01/14/gpu-wsl</span></a></p>
Ian Brown :verified:<p>It has been a busy year, but I was able to finally get some downtime over the holidays to finish this book. </p><p>It is a great introduction to the core concepts of programming with GPUs and other co-processors, and surveys the future directions that managed runtimes like the <a href="https://mastodon.hccp.org/tags/JVM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>JVM</span></a> will take in order to simplify developing software on heterogeneous hardware environments.</p><p>It was super accessible, and I'd recommend this for anyone interested in the subject. Even managers! </p><p><a href="https://mastodon.hccp.org/@igb/112386562916088384" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">mastodon.hccp.org/@igb/1123865</span><span class="invisible">62916088384</span></a></p>
Ian Brown<p>Finally found the downtime to complete this fantastic survey of managed runtimes (e.g. the JVM) and heterogeneous hardware (e.g. CPUs and GPUs or FPGAs) by <a href="https://mastodon.online/users/snatverk" rel="nofollow noopener" target="_blank">@snatverk@mastodon.online</a>, <a href="https://mastodon.sdf.org/users/thanos_str" rel="nofollow noopener" target="_blank">@thanos_str@mastodon.sdf.org</a>, and <a href="https://mastodon.online/users/kotselidis" rel="nofollow noopener" target="_blank">@kotselidis@mastodon.online</a>. </p> <p>Required reading for those who want a look at the future of software development.</p> <p><a href="https://books.hccp.org/hashtag/177" rel="nofollow noopener" target="_blank">#TornadoVM</a> <a href="https://books.hccp.org/hashtag/184" rel="nofollow noopener" target="_blank">#JOCL</a> <a href="https://books.hccp.org/hashtag/175" rel="nofollow noopener" target="_blank">#OpenCL</a> <a href="https://books.hccp.org/hashtag/176" rel="nofollow noopener" target="_blank">#CUDA</a></p><p>(comment on <a href="https://books.hccp.org/book/31088" rel="nofollow noopener" target="_blank">"Programming Heterogeneous Hardware via Managed Runtime Systems"</a>)</p>
Juan Fumero<p>TornadoVM Performance vs OpenCL for Matrix Multiplication on NVIDIA RTX 4090.</p><p>I have been working on this video for a while, and now it is here! Performance analysis of <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a> vs <a href="https://mastodon.online/tags/OpenCL" class="mention hashtag" rel="tag">#<span>OpenCL</span></a> native. Plenty of Java, parallelism, OpenCL, and GPU programming.</p><p><a href="https://www.youtube.com/watch?v=xj8Te517Wtc" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=xj8Te517Wtc</span><span class="invisible"></span></a></p>
Juan Fumero<p>Want to know the capabilities of <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a> on RTX 4090? Did you know that, for some applications, TornadoVM is faster than native OpenCL? Is that even possible? If you are curious, I explore all of these questions in my latest blog article. </p><p>Using Matrix Multiplication as an example, I dive into all the optimisations that the TornadoVM JIT compiler and runtime perform to make the code fast on NVIDIA GPUs. How easy/hard is it to beat <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a>? Check it out! </p><p><a href="https://jjfumero.github.io/posts/2024/12/17/tornadovm-vs-opencl" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2024/</span><span class="invisible">12/17/tornadovm-vs-opencl</span></a></p>
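The kind of kernel being benchmarked above can be sketched as plain Java (an illustrative shape, not the post's actual benchmark code). In TornadoVM the two outer loops would carry the `@Parallel` annotation; it is shown as a comment here so the snippet compiles without TornadoVM on the classpath:

```java
// Matrix multiply written in the style of a TornadoVM kernel.
// With TornadoVM on the classpath, the commented annotations would
// mark the outer loops as parallel, and the method would then be
// registered in a TaskGraph for JIT compilation to the GPU.
public class MxM {
    static void mxm(float[] a, float[] b, float[] c, int n) {
        /* @Parallel */ for (int i = 0; i < n; i++) {
            /* @Parallel */ for (int j = 0; j < n; j++) {
                float sum = 0.0f;
                for (int k = 0; k < n; k++) {
                    sum += a[i * n + k] * b[k * n + j];
                }
                c[i * n + j] = sum;
            }
        }
    }

    public static void main(String[] args) {
        // 2x2 example: [[1,2],[3,4]] x [[5,6],[7,8]]
        float[] a = {1, 2, 3, 4};
        float[] b = {5, 6, 7, 8};
        float[] c = new float[4];
        mxm(a, b, c, 2);
        System.out.println(java.util.Arrays.toString(c));
    }
}
```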
Juan Fumero<p>Interesting Read: &quot;Pascal: The Underrated Gem Among Programming Languages&quot; </p><p><a href="https://simplifycpp.org/?id=a0534" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="">simplifycpp.org/?id=a0534</span><span class="invisible"></span></a></p><p>Pascal was my first programming language. I even implemented a mini-Pascal compiler in Pascal during my Compilers course at university.</p>
Juan Fumero<p>Running benchmarks with <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a> on a <a href="https://mastodon.online/tags/RISCV" class="mention hashtag" rel="tag">#<span>RISCV</span></a> SBC CPU with vector units, targeting the hardware via <a href="https://mastodon.online/tags/OCK" class="mention hashtag" rel="tag">#<span>OCK</span></a>. Very promising results -&gt; 11x over Java Sequential and 4.5x over Java Parallel Streams on the same CPU. More info very soon!</p>
Juan Fumero<p>The Level Zero JNI library is open source; we implemented it to dispatch SPIR-V kernels in TornadoVM. TornadoVM could also adopt shared memory buffers to gain extra performance on shared-memory systems.</p><p>So, what&#39;s the catch? The Level Zero JNI library contains hand-written kernels with explicit runtime calls to manage and dispatch the code on GPUs, while in TornadoVM, well, the code is parallelized and automatically accelerated from sequential Java.</p>
Juan Fumero<p>Java <a href="https://mastodon.online/tags/Llama2" class="mention hashtag" rel="tag">#<span>Llama2</span></a> fork extended with GPU support using Level Zero JNI lib to run on Intel ARC and Integrated GPUs. The initial version from my colleague Michalis Papadimitriou includes <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a>.</p><p>The level-zero port achieves higher tok/s vs TornadoVM on Integrated GPUs.</p><p>🔗 <a href="https://github.com/jjfumero/llama2.tornadovm.java" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">github.com/jjfumero/llama2.tor</span><span class="invisible">nadovm.java</span></a></p>
Juan Fumero<p>We just dropped a new TornadoVM version, 1.0.8, with improvements and many fixes. This version expands the profiler of the LevelZero/SPIRV backend with power metrics, adds new API calls to log and debug execution plans, includes fixes for running on macOS 14.6, and more! </p><p>Check it out!<br />🔗 <a href="https://github.com/beehive-lab/TornadoVM/releases/tag/v1.0.8" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">github.com/beehive-lab/Tornado</span><span class="invisible">VM/releases/tag/v1.0.8</span></a></p><p><a href="https://mastodon.online/tags/java" class="mention hashtag" rel="tag">#<span>java</span></a> <a href="https://mastodon.online/tags/ai" class="mention hashtag" rel="tag">#<span>ai</span></a> <a href="https://mastodon.online/tags/gpus" class="mention hashtag" rel="tag">#<span>gpus</span></a> <a href="https://mastodon.online/tags/fpgas" class="mention hashtag" rel="tag">#<span>fpgas</span></a> <a href="https://mastodon.online/tags/accelerators" class="mention hashtag" rel="tag">#<span>accelerators</span></a> <a href="https://mastodon.online/tags/graalvm" class="mention hashtag" rel="tag">#<span>graalvm</span></a></p>
Juan Fumero<p>Join me at the UXL oneAPI DevSummit on the 10th October 2024 </p><p>🔗 <a href="https://www.oneapi.io/events/oneapi-devsummit-hosted-by-uxl-foundation/" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://www.</span><span class="ellipsis">oneapi.io/events/oneapi-devsum</span><span class="invisible">mit-hosted-by-uxl-foundation/</span></a></p>
Juan Fumero<p>[Blog] Using the oneAPI Construction Kit and #TornadoVM to accelerate Java programs on x86, ARM and RISC-V CPUs. In this blog, we explore how OCK can be used by TornadoVM to run on different CPU architectures.</p><p>This post shows how to set up OCK for use with TornadoVM on Intel, ARM Neoverse V2 (NVIDIA Grace Hopper Superchip) and RISC-V CPUs. It also shows a performance evaluation of TornadoVM/OCK compared to Java Parallel Streams running on the same CPU. </p><p>🔗 <a href="https://jjfumero.github.io/posts/2024/09/10/tornadovm-ock" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">jjfumero.github.io/posts/2024/</span><span class="invisible">09/10/tornadovm-ock</span></a></p>
Juan Fumero<p>We are improving the driver support for new hardware accelerators for <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a>. With the help of Codeplay and the AERO EU Project, we are now able to run Java programs on <a href="https://mastodon.online/tags/RISCV" class="mention hashtag" rel="tag">#<span>RISCV</span></a> simulators using a fully open source software stack:</p><p>1️⃣ oneAPI Construction Kit<br />2️⃣ TornadoVM</p>
Juan Fumero<p>The good thing about the OCK is that it is a portable CPU implementation of OpenCL, not just for x86 CPUs, but also for ARM and RISC-V, and even custom accelerators. Stay tuned!</p>
Juan Fumero<p>Thanks to Codeplay Software and the AERO Project, we are able to accelerate Java programs on CPUs via the <a href="https://mastodon.online/tags/TornadoVM" class="mention hashtag" rel="tag">#<span>TornadoVM</span></a> framework and the oneAPI Construction Kit (OCK).</p><p>We measured a 3.4x speedup over Java Parallel Streams on the same CPU (32 cores) when running Matrix Multiplications of size 1024x1024. </p><p>I will probably write a detailed blog post about how to use it, with some performance numbers. </p><p><a href="https://mastodon.online/tags/java" class="mention hashtag" rel="tag">#<span>java</span></a> <a href="https://mastodon.online/tags/acceleration" class="mention hashtag" rel="tag">#<span>acceleration</span></a> <a href="https://mastodon.online/tags/ai" class="mention hashtag" rel="tag">#<span>ai</span></a> <a href="https://mastodon.online/tags/cpus" class="mention hashtag" rel="tag">#<span>cpus</span></a> <a href="https://mastodon.online/tags/gpus" class="mention hashtag" rel="tag">#<span>gpus</span></a></p>
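The Java Parallel Streams baseline mentioned above can be sketched with the plain JDK like this (an illustrative version, not the actual benchmark code from the evaluation):

```java
import java.util.stream.IntStream;

// Matrix multiply parallelized over rows with Java Parallel Streams:
// the CPU baseline that TornadoVM/OCK is compared against.
public class StreamMxM {
    static void mxm(float[] a, float[] b, float[] c, int n) {
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < n; j++) {
                float sum = 0.0f;
                for (int k = 0; k < n; k++) {
                    sum += a[i * n + k] * b[k * n + j];
                }
                c[i * n + j] = sum;
            }
        });
    }

    public static void main(String[] args) {
        // 2x2 example: [[1,2],[3,4]] x [[5,6],[7,8]]
        float[] a = {1, 2, 3, 4};
        float[] b = {5, 6, 7, 8};
        float[] c = new float[4];
        mxm(a, b, c, 2);
        System.out.println(java.util.Arrays.toString(c));
    }
}
```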
Lindsey Kuper<p>We're seeking speakers for this year's Languages, Systems, and Data Seminar! 🎉</p><p>We meet on Fridays in fall, winter, and spring, in person and on Zoom. Most of our speakers are PhD students working in the areas of programming languages, systems, databases, etc.</p><p>If you've never given an LSD talk and you'd like to, nominate yourself: <a href="https://forms.gle/jGN6qUqKrQbABJvo8" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">forms.gle/jGN6qUqKrQbABJvo8</span><span class="invisible"></span></a></p><p>We try not to have repeats, so if you *have* given an LSD talk before, pass the form along to a friend who you think would give a good talk!</p>