A new artificial intelligence-powered tool called Style2Fab has made it easier to customize models of 3D-printable objects, including things like assistive devices, without impeding how they function.
The cost of 3D printers has dropped dramatically in recent years, putting the technology within reach of more people than ever across the globe and fueling an ever-expanding community of beginners designing and bringing their ideas to life.
Amateurs can now freely browse open-source repositories of user-generated 3D models, downloading designs and manufacturing objects on their own printers.
However, adding personalized design elements to these models is an obstacle for many novice makers, because doing so typically requires expensive, complex computer-aided design (CAD) software. The task becomes even harder when the model's original design representation is not available online.
Even a fabricator who manages to add custom components faces a second problem: ensuring those new elements do not interfere with how the object functions requires domain expertise that most novices do not possess.
To help novices tackle these obstacles, researchers at the Massachusetts Institute of Technology (MIT) developed Style2Fab, a generative-AI tool that lets users add personalized features to 3D-printable models without compromising the printed object's functionality.
The MIT-built tool customizes objects from natural-language prompts alone: the user describes exactly which design elements they want, and the resulting model can then be fabricated on a 3D printer with its functionality intact.
The main problem novices face is that, having downloaded a model, they often have no idea how to personalize it and get stuck. Style2Fab lets such users customize and print a 3D model with relative ease, encouraging experimentation.
The tool uses deep-learning algorithms to automatically separate a model into aesthetic segments and functional segments that must be preserved for the object to work, streamlining the design and fabrication process.
Beyond making fabrication more accessible to beginners, Style2Fab could also benefit the promising field of medical making. Research suggests that functionality and appearance are the two main factors in a patient's decision to invest in an assistive device, yet most clinicians and patients lack the know-how to personalize 3D-printable models on their own.
With Style2Fab, for example, a user could restyle a thumb splint to blend in with their clothing without affecting the fit, comfort, or function of the wearable device. Making assistive devices more attractive and user-friendly was one of the researchers' main motivations in this rapidly expanding field.
Functionality is a key focus
One of the best-known repositories of free, open-source designs is Thingiverse, which lets individuals upload user-created digital design files that anyone can download and produce on a 3D printer.
Faraz Faruqi, a computer science graduate student and lead author of the paper introducing Style2Fab, and his collaborators began the project by examining the objects available in these vast online repositories, aiming to better understand the functionalities present across a broad range of 3D models.
Studying these objects gave them a clearer picture of how best to use AI to divide 3D-printable models into aesthetic and functional components.
Because AI cannot by itself determine what a 3D model is for, a human must retain some decision-making in the process. With that in mind, the researchers distinguished two kinds of functionality: internal functionality, involving parts of the model that must fit together after fabrication, and external functionality, involving parts that interact with the outside world.
A customization tool must preserve the shape and size of both internally and externally functional components while allowing the nonfunctional, aesthetic components to be personalized.
To make this possible, Style2Fab must learn which elements of a 3D model are functional. Using machine learning, it analyzes the model's topology and tracks how frequently the geometry changes, such as at the angles or curves where two planes meet. On that basis, the model is split into distinct segments.
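The geometric segmentation step can be pictured with a small sketch. This is not the authors' implementation; the function name `segment_faces`, the threshold, and the toy inputs are all illustrative assumptions. It groups adjacent mesh faces into the same segment whenever their normals differ by less than a threshold angle, using a union-find structure:

```python
import math

def segment_faces(normals, adjacency, angle_threshold_deg=30.0):
    """Group mesh faces into segments. Adjacent faces whose unit normals
    differ by less than the threshold angle share a segment (union-find).
    `normals`: list of unit vectors; `adjacency`: pairs of face indices."""
    parent = list(range(len(normals)))

    def find(i):
        # Walk up to the root, compressing the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    cos_thresh = math.cos(math.radians(angle_threshold_deg))
    for a, b in adjacency:
        dot = sum(x * y for x, y in zip(normals[a], normals[b]))
        if dot >= cos_thresh:          # smooth transition -> merge segments
            parent[find(a)] = find(b)

    return [find(i) for i in range(len(normals))]

# Toy example: two coplanar faces and one perpendicular face.
labels = segment_faces(
    [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)],
    [(0, 1), (1, 2)],
)
```

In this toy run, the two upward-facing faces end up in one segment and the perpendicular face in another, mimicking a split at a sharp edge.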
Style2Fab then compares those segments against a dataset the researchers assembled of nearly 300 annotated 3D models, in which every segment carries an aesthetic or functional label. A segment that matches one of those labeled functional pieces is marked as functional.
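One simple way to picture this dataset comparison is a nearest-neighbour lookup against labeled example segments. The article does not describe the actual matching method, so the function `classify_segment`, the feature vectors, and the fallback rule below are all hypothetical:

```python
def classify_segment(features, labeled_examples, max_distance=0.5):
    """Label a segment 'functional' or 'aesthetic' by its nearest
    neighbour in a labeled dataset. When nothing is close enough,
    fall back to 'aesthetic' as a recommendation the user can change."""
    best_label, best_dist = None, float("inf")
    for example_features, label in labeled_examples:
        # Euclidean distance between feature vectors.
        dist = sum((a - b) ** 2 for a, b in zip(features, example_features)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else "aesthetic"

# Hypothetical labeled dataset of (feature_vector, label) pairs.
examples = [((0.9, 0.1), "functional"), ((0.1, 0.8), "aesthetic")]
print(classify_segment((0.85, 0.15), examples))
```

Treating low-confidence matches as mere recommendations mirrors the next point in the article: the final call stays with the user.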
However, geometry alone makes individual segments difficult to classify, given the sheer variety of previously shared models. The classifications therefore start as a set of recommendations, which the user can review and easily switch between functional and aesthetic.
Humans still needed
Once the user accepts the segmentation, they enter a natural-language prompt describing the design elements they want, such as a smartphone case in the style of ancient Roman art or a smooth white vase. An AI tool called Text2Mesh then attempts to work out what a 3D model matching that description should look like.
Style2Fab then restyles the model's aesthetic segments, adjusting geometry and adding color and texture where needed so the printed version matches the described design as closely as possible. The functional segments remain out of reach.
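The "functional segments remain out of reach" behaviour can be illustrated with a sketch that applies a style transformation only to segments labeled aesthetic. The names `stylize_model`, the dict layout, and `style_fn` are assumptions for illustration, not the tool's actual API:

```python
def stylize_model(segments, style_fn):
    """Apply a style transformation only to aesthetic segments, leaving
    functional geometry untouched. Each segment is a dict with a
    'label' ('functional' or 'aesthetic') and a list of 'vertices'."""
    styled = []
    for seg in segments:
        if seg["label"] == "aesthetic":
            # Restyle: transform every vertex of this segment.
            styled.append({**seg, "vertices": [style_fn(v) for v in seg["vertices"]]})
        else:
            # Functional: shape and size preserved exactly.
            styled.append(seg)
    return styled

# Hypothetical model with one functional and one aesthetic segment;
# the style function here simply scales vertices up.
segments = [
    {"label": "functional", "vertices": [(1.0, 1.0)]},
    {"label": "aesthetic", "vertices": [(1.0, 1.0)]},
]
out = stylize_model(segments, lambda v: (v[0] * 2, v[1] * 2))
```

After the call, only the aesthetic segment's vertices have changed, which is the invariant that keeps a splint fitting or a threaded part screwing in.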
The researchers wrapped these components into the back end of a user interface that automatically segments and personalizes a model with just a few taps or clicks and a written prompt from the user.
A study involving fabricators with a diverse range of 3D modeling abilities found Style2Fab useful in different ways depending on the maker's experience level. Beginners could readily understand the interface and personalize their models, and the tool gave them a fruitful, low-barrier foundation for experimentation.
More experienced users, meanwhile, developed noticeably faster workflows and used the system's advanced options for more detailed control over personalization.
Looking ahead, Faruqi and his collaborators plan to extend the system to account for geometry and physical properties. Changing an object's shape, for example, can greatly affect how much force it can handle, potentially weakening the object or causing it to fail outright.
The researchers also want to let users design and fabricate their own personalized 3D models from start to finish with relative ease, and they are working closely with Google on a new project described as the follow-up to Style2Fab.