READ MORE: Deep Science: AI cuts, flows and goes green (TechCrunch)
The carbon cost of crunching the numbers using artificial intelligence and machine learning can be reduced if the most efficient methods are used.
That’s according to a new study co-authored by Google AI lead Jeff Dean, as reported by TechCrunch. While some research has found that training a large model can generate carbon dioxide emissions equivalent to those of a small neighborhood, the Google-affiliated study contends that “following best practices” can reduce machine learning carbon emissions by up to 1,000x.
The practices in question concern the types of models used, the machines used to train models, “mechanization” (e.g., computing in the cloud versus on local computers) and “map” (picking data center locations with the cleanest energy).
According to Dean and his co-authors, selecting “efficient” models alone can reduce computation by factors of five to 10, while using processors optimized for ML training, such as GPUs, can improve the performance-per-watt ratio by factors of two to five.
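The savings the study describes compound across the four practices. As a rough sketch, the arithmetic below multiplies the two factors quoted in the article with hypothetical placeholder values for the “mechanization” and “map” factors, which this excerpt does not quantify, to show how individual improvements can multiply toward the headline 1,000x figure.

```python
# Illustrative arithmetic only: combining per-practice savings factors
# multiplicatively. The first two figures are the upper bounds quoted in
# the article; the cloud and location factors are hypothetical placeholders,
# not numbers from the study.
model_factor = 10     # efficient model selection: 5-10x (article)
machine_factor = 5    # ML-optimized processors: 2-5x (article)
cloud_factor = 2      # hypothetical: efficient cloud data centers
location_factor = 10  # hypothetical: low-carbon data center locations

combined = model_factor * machine_factor * cloud_factor * location_factor
print(f"Combined reduction: {combined}x")  # -> Combined reduction: 1000x
```

The point of the multiplication is that no single practice delivers three orders of magnitude on its own; the factors only reach that scale in combination.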
(Google, however, has skin in the game, and many of the company’s products, from Google Maps to Google Search, rely on models that required large amounts of energy to develop and run.)
Mike Cook, a member of the Knives and Paintbrushes open research group, tells TechCrunch that, even if the study’s estimates are accurate, there simply isn’t a good reason for a company not to scale up in an energy-inefficient way if doing so benefits it. While academic groups might pay attention to metrics like carbon impact, companies aren’t incentivized in the same way — at least not currently.
“The whole reason we’re having this conversation to begin with is that companies like Google and OpenAI had effectively infinite funding, and chose to leverage it to build models at any cost, because they knew it gave them an advantage,” Cook said. “Overall, I think it’s great if we’re thinking about efficiency, but the issue isn’t a technical one in my opinion — we know for a fact that these companies will go big when they need to, they won’t restrain themselves, so saying this is now solved forever just feels like an empty line.”
TechCrunch also reports on another instance of AI being turned to sustainability goals. German research institute Fraunhofer has developed a machine learning model for identifying parts of defunct machines so they can be put back to use instead of heading to the scrap yard.
READ MORE: Turning old into new: A second life for vehicle components (Fraunhofer)
A huge number of used parts end up in the scrap yard for recycling every year. It is far more resource-efficient, however, to remanufacture alternators, starters and the like as part of a recirculation approach. This reduces waste, lowers the CO2 footprint and extends the service life of products. Fraunhofer is developing an AI-based assistance system for semi-automated image-based identification of used parts without QR or bar codes. This will assist the worker with the sorting process so that more used components can be sent for remanufacturing.
This system scans the packaging to gather information about the product group. By breaking this process down into subtasks, the search range for identification is reduced from 1:120,000 to 1:5,000. The used part is then weighed and recorded by stereo 3D cameras. The results obtained from the image-based processing stage are combined with the analysis of the part-specific commercial data, such as the origin, date and location, in order to identify the part reliably. The information is processed by two AI systems simultaneously.
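The staged narrowing described above can be sketched as a sequence of cheap filters that shrink the candidate set before any expensive image-based matching runs. The sketch below assumes a hypothetical catalogue of 120,000 part types spread over 24 product groups, so that the packaging scan alone cuts the search to roughly 5,000 candidates, after which a weight measurement narrows it further. All names, numbers and data fields are illustrative, not Fraunhofer's actual data model.

```python
from dataclasses import dataclass
import random

@dataclass
class Part:
    part_id: int
    product_group: str
    weight_g: float

# Hypothetical catalogue of 120,000 known part types in 24 product groups.
random.seed(0)
groups = [f"group_{i}" for i in range(24)]
catalogue = [
    Part(i, random.choice(groups), random.uniform(100, 5000))
    for i in range(120_000)
]

def narrow_by_group(parts, group):
    """Stage 1: the packaging scan yields the product group."""
    return [p for p in parts if p.product_group == group]

def narrow_by_weight(parts, measured_g, tolerance_g=50.0):
    """Stage 2: the scale reading excludes parts outside a weight window."""
    return [p for p in parts if abs(p.weight_g - measured_g) <= tolerance_g]

by_group = narrow_by_group(catalogue, "group_3")       # ~1:120,000 -> ~1:5,000
candidates = narrow_by_weight(by_group, measured_g=1500.0)
print(len(catalogue), "->", len(by_group), "->", len(candidates))
```

Only the surviving candidates would then be passed to the camera-based stage, which is what makes breaking the task into subtasks pay off.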
“One AI system was trained for image processing, which was our task for the project, and the other one was trained for commercial data. We use convolutional neural networks for the image processing AI method. These are algorithms from the field of machine learning that specialize in extracting features from image data,” explains Fraunhofer. The outcome of the identification process is shown to the employee, who receives a suggestion list with a preview image and part number, thus retaining full control.
“The AI is incorporated into the ongoing operation and the work process is not disrupted. The worker has no extra tasks to perform, which is extremely important in this time-sensitive process. Our AI system runs on conventional desktop PCs. All of the company’s sites can be networked via the cloud, meaning that the practical knowledge of one employee can benefit workers at other sites.” The technology can be used for all types of dimensionally stable components.
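One plausible way to merge the outputs of the two AI systems into the single ranked suggestion list the worker sees is a weighted score fusion. The function below is a hypothetical sketch of that idea; the part numbers, scores and weighting are illustrative and not Fraunhofer's actual method.

```python
# Hypothetical score fusion: each AI system emits a per-part-number
# confidence, and a weighted sum ranks the suggestion list shown to the
# worker. The 0.7 image weight is an illustrative assumption.
def fuse_scores(image_scores, commercial_scores, image_weight=0.7):
    parts = set(image_scores) | set(commercial_scores)
    combined = {
        p: image_weight * image_scores.get(p, 0.0)
           + (1 - image_weight) * commercial_scores.get(p, 0.0)
        for p in parts
    }
    # Highest combined confidence first.
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

image_scores = {"A123": 0.92, "B456": 0.55, "C789": 0.31}
commercial_scores = {"A123": 0.80, "B456": 0.70}
suggestions = fuse_scores(image_scores, commercial_scores)
print(suggestions[0][0])  # -> A123
```

The ranked list, rather than a single hard decision, is what lets the employee retain full control over the final identification.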
A study conducted as part of the project found a recognition accuracy of 98.9%, meaning far more used parts could be fed back into the cycle than before.
Another area few people think about is the enormous number of machine parts and components produced by various industries. Some can be reused, some recycled, and others must be disposed of responsibly, but there are far too many for human specialists to sort through.
We’ve seen machine learning models taking on big data sets in biotech and finance, but researchers at ETH Zurich and LMU Munich are applying similar techniques to the data generated by international development aid projects such as disaster relief and housing. The team trained its model on millions of projects (amounting to $2.8 trillion in funding) from the last 20 years, an enormous dataset that is too complex to be manually analyzed in detail.
“You can think of the process as an attempt to read an entire library and sort similar books into topic-specific shelves. Our algorithm takes into account 200 different dimensions to determine how similar these 3.2 million projects are to each other — an impossible workload for a human being,” said study author Malte Toetzke.
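The similarity computation described in the quote can be sketched with plain cosine similarity over 200-dimensional vectors. The vectors below are random stand-ins for the study's project representations; the actual pipeline and the meaning of its 200 dimensions are not reproduced here.

```python
import math
import random

# Illustrative only: pairwise cosine similarity over 200-dimensional
# project representations. The vectors are random placeholders, not the
# study's real embeddings.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

random.seed(0)
projects = [[random.gauss(0, 1) for _ in range(200)] for _ in range(5)]

# Find each project's nearest neighbor among the others.
for i, p in enumerate(projects):
    sims = [(cosine_similarity(p, q), j) for j, q in enumerate(projects) if j != i]
    best_sim, best_j = max(sims)
    print(f"project {i} is most similar to project {best_j} ({best_sim:.2f})")
```

Scaled from five toy vectors to 3.2 million projects, each pairwise comparison stays cheap, but the sheer number of comparisons is what makes the task impossible by hand.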
Very top-level trends suggest that spending on inclusion and diversity has increased, while climate spending has, surprisingly, decreased in the last few years. The researchers have made the dataset and the trends they analyzed available to explore.