As I get ready for this year's Maker Faire, I can't help but be nostalgic for the first years of that amazing event, and what we did in those years. I really need to work more on documenting it. Some of the best things I've ever done took place at the Maker Faire :D

kristinhenry.medium.com/galaxy

Medium · GalaxyGoo’s Cell Project and the Maker Faire - Kristin Henry

Takashi Kawase has released a #linux version of VVDViewer (github.com/JaneliaSciComp/VVDV), a powerful volume renderer (#scivis, #sciviz) for #fluorescence #microscopy data. To celebrate, here is a video rendered by VVDViewer as driven by the Linux version of neuVid. The input to neuVid is a high-level description of the video, in this case showing four #drosophila lines from the #HHMIJanelia FlyLight Gen1 MCFO collection. The input text is pretty simple... (1/3)
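The actual input text follows later in this thread. Purely as a hedged illustration of the idea of a "high-level description of the video," here is a guess at what such an input might look like, written as a Python dict; the key names, commands, and file names below are assumptions for illustration, not neuVid's documented schema.

```python
# Hypothetical sketch only: the keys, commands, and file names are assumptions,
# NOT neuVid's documented input format. See the neuVid repository for the real schema.
description = {
    # Volumes to render: four placeholder FlyLight Gen1 MCFO lines.
    "volumes": {
        "source": "path/to/mcfo/volumes",   # placeholder path
        "lineA": "example_line_A.h5",       # hypothetical file names
        "lineB": "example_line_B.h5",
        "lineC": "example_line_C.h5",
        "lineD": "example_line_D.h5",
    },
    # A short sequence of simple, declarative animation steps.
    "animation": [
        ["fade", {"volume": "lineA", "startingAlpha": 0, "endingAlpha": 1, "duration": 2}],
        ["orbitCamera", {"duration": 6}],
        ["fade", {"volume": "lineA", "startingAlpha": 1, "endingAlpha": 0, "duration": 2}],
    ],
}
```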

Just realized I never posted an #introduction, so here we go:

Hi 👋 my name is Hagen, and I'm currently pursuing my PhD in computer science. My research interests include Information Visualization, Scientific Visualization, and Computer Graphics.

Please check my personal website for more details on my work and publications, and feel free to say hi!

Prompted by @tess_machling - more messing about with #lidar data from the UK Environment Agency's National LIDAR Programme, this time with renders of the area around #Snettisham, #Norfolk.

Landscape is ... quite flat, so I experimented with lighting and surface shading to accentuate even the most subtle surface details.

Made with GDAL, Photoshop and Modo - it's definitely not a standard lidar pipeline...
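For anyone curious what a conventional starting point looks like, here is a minimal sketch of a plain GDAL hillshade pass; this is a generic workflow with placeholder file names, not the custom lighting and shading pipeline described above.

```python
import subprocess

def hillshade(dtm_path: str, out_path: str,
              z_factor: float = 2.0, azimuth: float = 315.0, altitude: float = 45.0) -> None:
    """Render a basic hillshade from a LiDAR-derived DTM GeoTIFF with gdaldem."""
    subprocess.run(
        [
            "gdaldem", "hillshade",
            dtm_path, out_path,
            "-z", str(z_factor),    # vertical exaggeration helps very flat terrain
            "-az", str(azimuth),    # light azimuth in degrees
            "-alt", str(altitude),  # light altitude in degrees
        ],
        check=True,
    )

# Example usage (placeholder file names):
# hillshade("snettisham_dtm_1m.tif", "snettisham_hillshade.tif")
```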

Jumbo-sized renders:

hylobatidae.org/misc/lidar/sne

hylobatidae.org/misc/lidar/sne

What is the state of the art in #deeplearning for volume rendering of scientific data (#scivis, #sciviz)? There are interesting ideas in this paper by Weiss and Navab, but I feel there is more to do:

arxiv.org/abs/2106.05429

arXiv.org · Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images

Volume Rendering is an important technique for visualizing three-dimensional scalar data grids and is commonly employed for scientific and medical image data. Direct Volume Rendering (DVR) is a well established and efficient rendering algorithm for volumetric data. Neural rendering uses deep neural networks to solve inverse rendering tasks and applies techniques similar to DVR. However, it has not been demonstrated successfully for the rendering of scientific volume data. In this work, we introduce Deep Direct Volume Rendering (DeepDVR), a generalization of DVR that allows for the integration of deep neural networks into the DVR algorithm. We conceptualize the rendering in a latent color space, thus enabling the use of deep architectures to learn implicit mappings for feature extraction and classification, replacing explicit feature design and hand-crafted transfer functions. Our generalization serves to derive novel volume rendering architectures that can be trained end-to-end directly from examples in image space, obviating the need to manually define and fine-tune multidimensional transfer functions while providing superior classification strength. We further introduce a novel stepsize annealing scheme to accelerate the training of DeepDVR models and validate its effectiveness in a set of experiments. We validate our architectures on two example use cases: (1) learning an optimized rendering from manually adjusted reference images for a single volume and (2) learning advanced visualization concepts like shading and semantic colorization that generalize to unseen volume data. We find that deep volume rendering architectures with explicit modeling of the DVR pipeline effectively enable end-to-end learning of scientific volume rendering tasks from target images.
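As context (an illustrative sketch, not code from the paper): classic DVR marches rays through the volume, maps each sample to a color and opacity via a transfer function, and composites front to back; DeepDVR's proposal is to learn that mapping from example images instead of hand-crafting it. A minimal NumPy version of the compositing step, with a made-up toy transfer function, might look like this:

```python
import numpy as np

def composite_ray(samples: np.ndarray, transfer_function) -> np.ndarray:
    """Front-to-back compositing of one ray's scalar samples (near to far).

    transfer_function maps a scalar sample to (rgb, alpha); it is hand-crafted
    here, which is exactly the part DeepDVR proposes to replace with a network.
    """
    color = np.zeros(3)
    alpha_acc = 0.0
    for s in samples:
        rgb, a = transfer_function(s)
        # Standard front-to-back alpha compositing.
        color += (1.0 - alpha_acc) * a * np.asarray(rgb)
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc > 0.99:  # early ray termination
            break
    return color

# Toy, hand-crafted transfer function (grayscale ramp), for illustration only.
def toy_tf(s: float):
    a = float(np.clip(s, 0.0, 1.0)) * 0.1
    return (s, s, s), a

# Example: composite_ray(np.linspace(0.0, 1.0, 128), toy_tf)
```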

Those interested in scientific visualization (#scivis, #sciviz, #datavis, #dataviz) may enjoy the videos in the #HHMIJanelia Scientific Visualization Interest Group playlist:

youtube.com/playlist?list=PLfB

These videos include talks from the creators of systems like Agave, ANARI, Datoviz, #napari, Neuroglancer, PyGfx, VisPy, and VMD. Thanks to @billkatz for organizing the series.