All Posts by Shawn Zindroski

SNL Creative’s Mobile 3D Scanning for BTS and Line Friends

By Entertainment, Uncategorized

At SNL Creative, we recently worked with Line Friends to capture BTS hand and signature impression tiles for their new pop-up store in Los Angeles. Using our ATOS photogrammetry mobile 3D scanning technology, we captured accurate 3D data of the handprint tiles, which was then used to create a stunning and immersive experience for fans of the popular K-pop group.

Line Friends is a global character brand that creates and distributes various products featuring popular characters, including BTS. For their new pop-up store in Los Angeles, Line Friends wanted to create a unique and interactive experience for BTS fans, and they enlisted our services to help bring their vision to life.

Using our mobile 3D scanning technology, we captured accurate 3D data of the BTS handprint tiles. Our team worked quickly and efficiently, ensuring that we captured high-quality data while minimizing any disruption to the store. Once the data was captured, we processed it using our specialized software to create a 3D model of the handprint tiles.

The 3D model was then used to create an immersive experience for fans of BTS. Visitors to the pop-up store can view the 3D-printed model of the handprint tiles in stunning detail, allowing them to feel as if they are right there with the BTS members.

At SNL Creative, we are proud to have been a part of such an exciting project. Our mobile 3D scanning technology allowed us to capture accurate data quickly and efficiently, and our post-processing services ensured that the data was clean and ready for use in the fan experience. We look forward to working with Line Friends and other clients in the future to create innovative and immersive experiences for their customers.

SNL Creative 3D Prints Critical Camera Components for the Filming of Avatar: The Way of Water

By Entertainment

SNL Creative, Inc. was approached by the Director of 3D Camera Technology from Lightstorm Entertainment and 20th Century Studios, Inc., for an additive manufacturing solution for the 3D stereoscopic camera rigs used in the filming of Avatar: The Way of Water.


The custom-designed and fabricated systems include critical 3D-printed components such as the beamsplitter mirror box, camera tracking system enclosures, and clamps and supports for the camera lenses in the final builds.

 

Lightstorm was looking for an alternative to aluminum fabrication for several live-action camera systems. The part designs required superior material strength, tight tolerance accuracy, a black color, and a good surface finish, along with a lightweight material to meet the 30 lb weight requirement for the hand-held system.

SNL Creative met these challenges with our in-house technologies, using composite AM materials from Stratasys and Markforged.

 

SNL Creative demonstrated its expertise in composite 3D printing of end-use parts for low-volume production. Our FORTUS 450mc has an upgraded hardware and extrusion head that allows it to run the high-performance Nylon 12 Carbon Fiber polymer, and we leveraged the Markforged X7 for its ability to reinforce parts with carbon fiber and Kevlar infills. Replacing heavyweight aluminum with additively manufactured nylon carbon fiber reduced manufacturing costs and lead times.

SNL Creative and how we help California’s Automotive Design Community

By Automotive, Uncategorized

Additive manufacturing, or 3D printing, has been a game-changer for the automotive industry in recent years. From the design stage to production and beyond, 3D printing has proven to be a valuable tool for professionals at all automotive design and manufacturing levels.

One area where 3D printing has been beneficial is lightweighting. By using 3D printing to create complex geometries and lattice structures, manufacturers can reduce the weight of parts without sacrificing strength or durability. This not only helps to improve fuel efficiency but also helps to reduce emissions, making vehicles more environmentally friendly.

Another key benefit of 3D printing is part consolidation. By combining multiple parts into a single printed component, manufacturers can simplify their supply chains and reduce production costs. This is particularly useful for low-volume production runs or for custom parts that would otherwise require costly tooling.

Work holding and tooling are also areas where 3D printing can benefit significantly. Manufacturers can improve their production processes and reduce lead times by using 3D printing to create custom jigs, fixtures, and tooling. 3D printed tools can also be easily modified or replaced as needed, which helps to minimize downtime and increase overall efficiency.

Customization is another area where 3D printing excels. By using 3D printing to create bespoke parts or components, manufacturers can offer greater flexibility to their customers. This is particularly useful for the automotive aftermarket, where customers may be looking for unique or hard-to-find parts.

Finally, 3D printing is a valuable tool for creating vehicle color, material, and finish (CMF) options. By using 3D printing to create prototypes and test different CMF combinations, designers can quickly arrive at the perfect solution. This saves time and money and helps ensure that vehicles are visually appealing and exciting for consumers.

In conclusion, additive manufacturing is revolutionizing the automotive industry in various ways. From lightweighting and part consolidation to workholding, customization, and CMF design, 3D printing is providing automotive professionals with new tools and possibilities. As the technology continues to improve, it’s exciting to think about the ways in which it will continue to transform the industry in the years to come. Whether you’re a designer, engineer, or production line worker, it’s important to understand the potential of additive manufacturing and how it can help you achieve your goals.

SNL Creative Acquires EOS P396 SLS 3D Printing System

By 3D Printing General

[Huntington Beach, CA] – SNL Creative Inc. is proud to announce the acquisition of a state-of-the-art EOS P396 Selective Laser Sintering (SLS) 3D Printing System. This advanced technology allows for the creation of highly intricate and complex parts by laser sintering and melting a range of powdered, production-grade materials.


The EOS P396 SLS Printing System is a significant addition to SNL Creative’s current technology offerings. It will significantly enhance the company’s ability to produce volume production components, highly intricate parts, and prototypes. The system’s advanced features, including a fully dense build in X, Y, and Z, high-speed printing, and process monitoring, allow our customers to produce parts faster and more cost-effectively while maintaining the highest accuracy and quality.

 

“We are thrilled to be able to offer this advanced technology to our customers, as it will allow us to scale AM production while delivering qualified results to our clients,” said CEO Lindsey Zindroski. “The acquisition of the EOS P396 SLS Printing System is a testament to our commitment to providing our customers with the most innovative and cutting-edge technology available in the industry.”

 

Some benefits of using Selective Laser Sintering (SLS) 3D printing technology include the following:

 

Complex geometries: Because SLS requires no support structures, it allows for the production of complex and intricate parts that may not be achievable using other 3D printing technologies or traditional manufacturing methods.

Scalable production: Taking advantage of the full build envelope makes it possible to scale the number of parts produced in each build operation.

Material options: SLS technology supports various materials, including nylon, glass-filled nylon, polystyrene, TPE, and TPU.

Certified biocompatible parts: SLS-printed parts are known for their strength and durability, making them ideal for end-use hardware, and certified biocompatible materials make them suitable for medical applications.

Batch production: SLS technology enables the production of multiple parts in a single build, making it a cost-effective solution for small- to medium-scale production runs.

Post-processing: SLS parts can be further processed and finished with techniques such as sandblasting, painting, or dyeing.

Reduced waste: Unlike traditional manufacturing methods, SLS technology produces minimal waste, reducing material costs and improving the overall sustainability of the manufacturing process.

About SNL Creative

SNL Creative is a leading provider of Additive Manufacturing Services. The company has a proven track record of delivering innovative and practical solutions to its customers. It is committed to staying at the forefront of technology and advancements in its field.

SNL Creative Offers the Stratasys J735, the Latest in Color 3D Printing Technology

By 3D Printing General

Design and manufacturing are transforming, and 3D printing technology is at the forefront of this change. SNL Creative is part of the Stratasys preferred partner network. Stratasys, a leader in 3D printing, has published a whitepaper detailing the impact of this technology on the industry.


3D printing, also known as additive manufacturing, allows designers and engineers to create complex shapes and structures in a matter of hours rather than the weeks or months required by traditional manufacturing methods. This speed and flexibility have opened new possibilities for product design and development, allowing companies to bring innovative new products to market faster and at a lower cost.

One of the key advantages of 3D printing is the ability to quickly and easily create prototypes. This saves time and allows designers to test and refine their ideas rapidly. With the ability to print in a wide range of materials, including plastics, metals, and even human tissue, designers can create accurate and functional prototypes that closely mimic the final product.

In addition, 3D printing is also revolutionizing the way that products are manufactured. Companies can quickly ramp up production and respond to changing market demands by eliminating the need for tooling and other traditional manufacturing methods. This also enables companies to produce low-volume, highly customized products, which is not feasible with conventional manufacturing methods.

3D printing is also having a profound impact on the supply chain. By allowing companies to produce products locally rather than relying on overseas manufacturing, 3D printing can reduce lead times and improve quality control. This can create new business opportunities and revitalize local manufacturing communities.

In conclusion, the revolution in design brought about by 3D printing is transforming the industry and providing new opportunities for companies to innovate and succeed. With its speed, flexibility, and versatility, 3D printing has the potential to change the way we design and manufacture products forever.

https://snlcreative.com/services/3d-printing/polyjet/

https://www.stratasys.com/en/resources/whitepapers/revolution-in-design/

 

AI Generated 3D Models from Images and Text Descriptions

By Artificial Intelligence

SNL Creative is testing new workflows built around AI programs such as DreamFusion, DALL-E 2, Stable Diffusion, and Point-E, all AI-based image-processing systems in beta or under development that convert 2D images and text descriptions into 3D models.


The process of converting 2D images into 3D geometry using AI is a relatively new field of research that has seen significant advancements in recent years. Neural 3D rendering emerged roughly two years ago, using the power of neural networks to provide a photorealistic experience far superior to other technologies available at the time, as stated in [1].

The problem of recovering the original 3D scene from a 2D image is known as inverse graphics, which is challenging because many different 3D shapes and scenes can produce the same 2D image. However, with advancements in AI, deep learning, and computer vision techniques, converting 2D images into 3D geometry is becoming more accurate, efficient, and cost-effective.

DreamFusion is an AI tool developed by researchers at Google that automatically transforms text prompts into full 3D models. It is an extension of software developed for text-to-image generation, which can produce detailed and realistic images from short descriptive sentences called prompts. DreamFusion is an expanded version of Dream Fields, a generative 3D system Google unveiled in 2021.

DALL-E 2 is an AI-based system developed by OpenAI that can generate 3D models from 2D images. It uses a modified GLIDE model that incorporates projected CLIP text embeddings in two ways: by adding the CLIP text embeddings to GLIDE’s existing timestep embedding, and by creating four extra tokens of context, which are concatenated to the output sequence of the GLIDE text encoder. The model was trained on several million 3D objects and their associated metadata, together with images of those objects, learning to generate corresponding point clouds from images. The system can be used to create 3D models directly from 2D images with minimal human intervention.
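
To make that conditioning scheme concrete, the sketch below shows the two paths in a few lines of PyTorch: the projected text embedding is added to the timestep embedding, and a second projection produces four extra context tokens that are concatenated to the text-encoder output. This is a minimal illustration under our own assumptions, not OpenAI’s implementation; the class name, dimensions, and linear projections are placeholders.

```python
# Minimal PyTorch sketch (not OpenAI's code) of the two conditioning paths
# described above: the projected CLIP text embedding is (1) added to the
# diffusion timestep embedding and (2) expanded into four extra context tokens
# concatenated to the text-encoder output. Names and dimensions are assumptions.
import torch
import torch.nn as nn

class ClipConditioning(nn.Module):
    def __init__(self, clip_dim=768, model_dim=512, n_extra_tokens=4):
        super().__init__()
        self.to_time = nn.Linear(clip_dim, model_dim)                     # path 1
        self.to_tokens = nn.Linear(clip_dim, n_extra_tokens * model_dim)  # path 2
        self.n_extra_tokens = n_extra_tokens
        self.model_dim = model_dim

    def forward(self, clip_text_emb, timestep_emb, encoder_tokens):
        # clip_text_emb:  (batch, clip_dim)        from a CLIP text encoder
        # timestep_emb:   (batch, model_dim)       diffusion timestep embedding
        # encoder_tokens: (batch, seq, model_dim)  text-encoder output sequence
        timestep_emb = timestep_emb + self.to_time(clip_text_emb)
        extra = self.to_tokens(clip_text_emb).view(
            -1, self.n_extra_tokens, self.model_dim)
        encoder_tokens = torch.cat([encoder_tokens, extra], dim=1)
        return timestep_emb, encoder_tokens
```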

Stable Diffusion is an algorithm that uses a generative model to convert 2D images into 3D models. The algorithm uses a combination of deep learning and computer vision techniques to analyze the image and create a 3D representation of the object. It is based on the idea of “diffusion,” in which an image is gradually corrupted with random noise and a neural network learns to reverse that corruption step by step.
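
As a rough illustration of that denoising idea, and not Stability AI’s actual code, the sketch below shows the core reverse-diffusion sampling loop: starting from pure noise, a trained network repeatedly predicts and removes a small amount of noise at each timestep. The noise_model function and the noise-schedule tensor are assumed placeholders.

```python
# Illustrative DDPM-style sampling loop (not Stability AI's implementation).
# noise_model(x, t) is assumed to predict the noise present at timestep t, and
# alphas_cumprod is a 1-D tensor holding the cumulative noise schedule.
import torch

def reverse_diffusion(noise_model, alphas_cumprod, shape, device="cpu"):
    T = len(alphas_cumprod)
    x = torch.randn(shape, device=device)            # start from pure noise
    for t in reversed(range(T)):
        a_bar = alphas_cumprod[t]
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        alpha_t = a_bar / a_bar_prev                  # per-step noise level
        eps = noise_model(x, torch.full((shape[0],), t, device=device))
        # Remove the predicted noise and step one timestep back toward the image.
        x = (x - (1.0 - alpha_t) / torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(alpha_t)
        if t > 0:
            x = x + torch.sqrt(1.0 - alpha_t) * torch.randn_like(x)
    return x
```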

Point-E is a machine learning system that creates 3D models from text prompts. It works in two stages: first, it uses a text-to-image AI to convert a worded prompt into an image; then it uses a second model to turn that image into a 3D point cloud. According to a paper published by the OpenAI team, Point-E can produce 3D models in minutes. The system was open-sourced by OpenAI in December 2022, and it aims to provide a quick and efficient way to generate 3D models from text inputs. [1], [2], [3]
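
The sketch below illustrates that two-stage flow under our own assumptions; it is not the actual point_e package API. The text_to_image() and image_to_point_cloud() functions stand in for the two trained models, and the result is written out as a simple ASCII PLY point cloud.

```python
# Hypothetical sketch of the two-stage flow described above (not the actual
# point_e package API). text_to_image() and image_to_point_cloud() are
# placeholders for the two trained models; the result is saved as ASCII PLY.
import numpy as np

def save_ply(points: np.ndarray, path: str) -> None:
    """Write an (N, 3) array of XYZ points as an ASCII PLY point cloud."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

def generate_point_cloud(prompt: str, out_path: str = "model.ply") -> None:
    image = text_to_image(prompt)           # stage 1: text -> synthetic view
    points = image_to_point_cloud(image)    # stage 2: image -> (N, 3) points
    save_ply(np.asarray(points), out_path)
```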

It’s worth noting that the quality and accuracy of the generated models heavily depend on the complexity of the input text, the quality of the training dataset, and the specific architecture and parameters used in the algorithm, so the output models might not be suitable for all use cases.

All four algorithms are in beta or still under development and research, but they show promising results in converting 2D images into 3D models. They can help automate the process of creating 3D models, making them faster, more accurate, and cost-effective, which can have applications in various industries such as gaming, animation, and architectural visualization.

Volumetric Neural Radiance Field (NeRF) [2] data is a type of 3D representation of an object or scene that is generated using a deep learning algorithm called Neural Radiance Field (NeRF). It is a volumetric representation, which means that it represents the object or scene as a 3D grid of voxels (3D pixels) rather than a surface mesh.

A NeRF model is trained on a set of 2D images of a scene together with their camera poses, learning the relationship between those views and the underlying 3D structure. Once trained, the model can produce a 3D representation of the object or scene and render it from new viewpoints. The learned representation is a continuous function that maps each point in 3D space to a feature vector describing the properties of the scene at that point.
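
The sketch below is a minimal, illustrative version of that continuous function: a small MLP that maps a positionally encoded 3D point to an RGB color and a volume density. The layer sizes and encoding depth are our own assumptions, not the exact architecture from the NeRF paper.

```python
# Minimal, illustrative version of the continuous scene function a NeRF learns:
# a small MLP mapping a positionally encoded 3D point to an RGB color and a
# volume density. Layer sizes and encoding depth are assumptions, not the
# exact architecture from the NeRF paper.
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    # Sine/cosine features let the MLP represent fine spatial detail.
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 + 3 * 2 * n_freqs           # xyz plus its encoded features
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),               # outputs: RGB (3) + density (1)
        )

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz))
        rgb = torch.sigmoid(out[..., :3])       # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])        # non-negative volume density
        return rgb, sigma
```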

Volumetric NeRF data can generate 3D models of objects or scenes with great detail and accuracy, even when the input images are taken from different viewpoints. This makes it useful for 3D reconstruction, virtual reality, and augmented reality applications.

One of the key features of volumetric NeRF data is that it can be rendered from any viewpoint, unlike traditional surface-based reconstructions, which are typically built from and limited to a fixed set of views. This allows for more flexibility and realism in 3D visualizations.
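
As a hedged sketch of how such free-viewpoint rendering works, the function below samples points along a single camera ray, queries a NeRF-style model (such as the TinyNeRF sketch above), and alpha-composites the predicted colors using the predicted densities. Ray generation from camera parameters is omitted, and the near/far bounds are arbitrary.

```python
# Hedged sketch of free-viewpoint rendering with a NeRF-style model (such as
# the TinyNeRF sketch above): sample points along one camera ray, query the
# network, and alpha-composite the colors using the predicted densities.
import torch

def render_ray(model, origin, direction, near=0.0, far=4.0, n_samples=64):
    t = torch.linspace(near, far, n_samples)             # depths along the ray
    pts = origin + t[:, None] * direction                # (n_samples, 3) points
    rgb, sigma = model(pts)
    delta = t[1] - t[0]                                   # spacing between samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)   # opacity per sample
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)            # composited RGB color
```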

It’s worth noting that the quality and accuracy of the generated models heavily depend on the input images’ complexity, the training dataset’s quality, and the specific architecture and parameters used in the algorithm, so the output models might not be suitable for all use cases.


Neural 3D Mesh Renderer (N3MR) [3] and similar generators of high-quality 3D textured shapes learned from images are a class of AI-based algorithms that use deep learning to generate 3D models from 2D images. They are designed to create highly detailed and accurate 3D models with realistic textures and lighting effects.

These models are trained on large datasets of 2D images and corresponding 3D models, and they learn to understand the relationship between the 2D image and the 3D shape. The models can then generate 3D models from new 2D photos by analyzing the image and creating a 3D representation of the object based on the learned relationship.

One example of this technology is the “Neural 3D Mesh Renderer (N3MR),” a neural network-based algorithm that combines deep learning and computer vision techniques to generate 3D models from 2D images. It can produce high-quality 3D models of objects with realistic textures and lighting effects by learning the relationship between the 2D image and the 3D shape from a dataset of 2D pictures and 3D models.
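
A differentiable renderer of this kind is typically used in an analysis-by-synthesis loop. The sketch below shows that loop in hedged form: render the current mesh, compare the rendering with the target photo, and backpropagate the pixel error to the vertices and texture. The differentiable_render() function is a placeholder standing in for the actual renderer, and the loss and optimizer settings are illustrative.

```python
# Hedged sketch of the analysis-by-synthesis loop behind differentiable mesh
# renderers such as N3MR: render the current mesh, compare the rendering with
# the target photo, and backpropagate the pixel error to the vertices and
# texture. differentiable_render() is a placeholder for the actual renderer.
import torch

def fit_mesh_to_image(vertices, faces, texture, target_image, steps=200, lr=1e-2):
    vertices = vertices.clone().requires_grad_(True)
    texture = texture.clone().requires_grad_(True)
    opt = torch.optim.Adam([vertices, texture], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = differentiable_render(vertices, faces, texture)  # placeholder
        loss = torch.nn.functional.mse_loss(rendered, target_image)
        loss.backward()        # gradients flow through the renderer to the mesh
        opt.step()
    return vertices.detach(), texture.detach()
```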

This technology can have various applications in different industries, such as gaming, animation, and architectural visualization. It can help automate the creation of 3D models, making them faster, more accurate, and cost-effective. With the advancements in deep learning and computer vision techniques, the quality of the generated models is also increasing, making them more suitable for realistic rendering and visualization.

The accuracy of the generated models heavily depends on the input images’ complexity, the training dataset’s quality, and the specific architecture and parameters used in the algorithm, so the output models might only be suitable for some use cases.


Photogrammetry [4]: The process of converting 2D images into 3D models is typically achieved through photogrammetry, a method of using photographs to measure and generate 3D models. The process involves taking multiple pictures of an object from different angles and using specialized software to analyze and process these images. The software then creates a 3D model by merging the individual photos, using the overlapping information to recover depth and form.

The process of photogrammetry is divided into two main steps: image acquisition and image processing.

  1. Image Acquisition: This step involves taking multiple photographs of an object from different angles. A high-resolution camera and a tripod are often used to ensure that the images are clear and stable. The photos should also be taken under consistent lighting conditions to minimize shadows and distortions.
  2. Image Processing: This step is performed using specialized software, such as Agisoft Photoscan, Autodesk ReMake, or RealityCapture. The software uses algorithms to analyze and process the images, creating a 3D model by merging the different pictures. The software also generates a texture map, which can be used to add color and other details to the 3D model.

The resulting 3D model can then be exported in a file format that a 3D printer can read, such as STL, OBJ, or VRML. The model can be printed using any 3D printing technology or used as a base model in Class A surface-modeling packages.
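
As a small illustration of that hand-off, the snippet below uses the open-source trimesh library to load a mesh exported by a photogrammetry package, run a quick printability check, and re-export it as an STL for 3D printing. The file names are placeholders, and the repair step is only a rough safeguard, not a full mesh-preparation workflow.

```python
# Small illustration of the hand-off step, using the open-source trimesh
# library: load the mesh a photogrammetry package exported, run a quick
# printability check, and re-export it as STL. File names are placeholders.
import trimesh

mesh = trimesh.load("scan_output.obj", force="mesh")   # photogrammetry result
if not mesh.is_watertight:
    # A printable model needs a closed surface; try to fill small holes.
    trimesh.repair.fill_holes(mesh)
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
mesh.export("scan_output.stl")                          # STL for 3D printing
```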

This process is not fully automated and requires manual adjustments, such as setting the correct parameters and refining the model after the initial processing. With advancements in AI and deep learning, however, it is becoming increasingly automated, with AI-based algorithms able to generate 3D models directly from images with minimal human intervention.


Furthermore, it is also essential to consider the ethical implications of AI-generated 3D models, such as potential copyright infringement, as AI models can borrow heavily from the training data.

In conclusion, AI-generated 3D models have the potential to revolutionize various industries and make the process of creating 3D models more efficient and cost-effective. However, it’s essential to consider the limitations and ethical implications of this technology.