Is R2021a configured to automatically use Apple M1 chip for GPU and Neural Engine?
Apple lists the following specs for the M1 chip:
- 8-core CPU with 4 performance cores and 4 efficiency cores
- 7-core or 8-core GPU
- 16-core Neural Engine
Will the Deep Learning Toolbox automatically use the Apple M1 chip's GPU and Neural Engine?
Accepted Answer
Joss Knight
on 4 Jun 2021
No. MATLAB only supports NVIDIA GPUs for computation.
23 Comments
Joss Knight
on 25 Jun 2021
Can you elaborate on what you mean by 'updated'? MATLAB works on the Apple ARM CPU and it draws graphics using the GPU. What it doesn't do is support the M1 integrated GPU as a GPU compute device - it doesn't do this for any GPUs other than NVIDIA ones. Are you asking if MATLAB will support the GPU for computation - for creating gpuArray objects?
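For readers unfamiliar with gpuArray: GPU computation in MATLAB is opt-in, and a minimal sketch looks like the following (requires Parallel Computing Toolbox for the GPU path; `canUseGPU` returns false on machines without a supported NVIDIA GPU, such as Apple silicon Macs, so the code simply stays on the CPU there):

```matlab
% Minimal sketch of opt-in GPU computation in MATLAB.
% On machines without a supported NVIDIA GPU (e.g. Apple silicon Macs),
% canUseGPU returns false and everything runs on the CPU.
A = rand(1000);
if canUseGPU()
    A = gpuArray(A);      % copy the data to GPU memory
end
B = A * A';               % executes on the GPU when A is a gpuArray
B = gather(B);            % bring the result back to host memory
```

The point of the `gather` call is that it is a no-op on ordinary arrays, so the same code runs unchanged with or without a GPU.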
Marko
on 29 Jun 2021
Hello Joss,
Are there any plans to support gpuArrays on Apple's M1?
Best regards
MJ
Joss Knight
on 29 Jun 2021
It's rare that we'll risk talking about the roadmap for MATLAB features in MATLAB Answers - not only is this privileged information, but it can also change very suddenly with events, and then we end up looking like we're breaking promises.
I can however tell you that there will be no support for gpuArray objects on any Mac in the next 2021 release of MATLAB.
MathWorks regularly reviews its policies on GPU support. Where there is a market, a robust, stable, performance portable language and runtime, and an extensive, supported, multi-platform ecosystem of tools, there is a good incentive to expand. We will be watching all these factors as use of the Apple M1 evolves.
It does interest me what your use case might be. To my knowledge, the M1 is available only on laptops for now. Laptops are superb for prototyping, but are generally not designed for high performance computing or power hungry computation. We've usually found that people can do most of what they need on their laptops with ordinary arrays, and convert to gpuArray when they're ready to deploy into a higher performance, higher power environment. Do you find yourself performing long-running computations on your laptop for which performance is an issue? I can imagine perhaps a realtime webcam-based object detection system, while someone running a machine learning training algorithm for many hours may be less common.
Marko
on 30 Jun 2021
Edited: Marko
on 30 Jun 2021
I understand that a critical mass of users must be reached before it makes sense to expand a product in a certain direction.
As you know, the advantage of GPUs is that they can perform "simple" operations very quickly and in a highly parallel fashion.
In the field of fluid mechanics, for example, the smoothed-particle hydrodynamics (SPH) method is very well suited to GPUs. With this method it is possible to run a real-time simulation.
According to my research, it is currently only possible to address the Apple GPU via Apple's "Metal" API, if I am correct. And since MathWorks only supports certain NVIDIA GPUs, I assume that only the CUDA framework is used.
But I can also understand your reasoning that if maximum performance is required, CUDA-based GPUs will be used.
For example, Apple's M1 GPU (found in consumer notebooks and iMacs) delivers about 2.6 teraflops, while the new $1,500 flagship card, the RTX 3090, has 10,496 cores for about 36 teraflops.
So the new NVIDIA GPU is "only" about 14x faster (in theory).
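The ratio quoted above is easy to check directly. Note these figures are rough theoretical FP32 peaks, not measured performance:

```matlab
% Rough check of the theoretical throughput ratio quoted above.
m1_tflops  = 2.6;                 % Apple M1 8-core GPU (FP32, theoretical)
rtx_tflops = 36;                  % NVIDIA RTX 3090 (FP32, theoretical)
ratio = rtx_tflops / m1_tflops;   % about 13.8, i.e. roughly 14x
```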
Thank you for your detailed explanation.
Patrick Hullett
on 12 Dec 2021
Edited: Patrick Hullett
on 18 Dec 2021
Hi Joss,
I do find myself performing long-running computations on my laptop for which performance is an issue. I do academic research and I work in three physical locations, so a desktop is not feasible. I am frequently writing new computationally intensive scripts for data analysis, and these take much longer to code up on our cluster (debugging is a real issue). This limits my use of our cluster to only the most computationally intensive tasks that are stable over time. There are also often reliability issues with shared clusters. With my laptop, I know my work will never be on hold for a day or weekend due to cluster issues. If I could use the 32 GPUs on the M1 Max, I would go buy it tomorrow.
Joss Knight
on 18 Dec 2021
Hi Patrick. I'm guessing you updated this comment with my name to encourage me to respond?! I wish I had some sort of positive answer for you. Despite your example, it is still uncommon for laptops to be used for long-running high-power-consumption computation. At the same time it is a huge commitment to support an entirely new type of hardware accelerator, mainly because MATLAB depends on 3rd party libraries for much of its GPU acceleration and there isn't one for everything...but also because every type of processor has its own characteristics and there is no one algorithm best for everything. All this amounts to needing a truly compelling reason to support new accelerators. But don't worry, we review this almost continuously, and we're well aware of the large Apple market. I would encourage you to use your CPU for now which is none too shabby. (Or get yourself a new laptop with an NVIDIA GPU of course, or an external GPU enclosure.)
By the way, the M1 Max has 32 GPU cores, not 32 GPUs...it's not quite as amazing as that!
Joss
Walter Roberson
on 18 Dec 2021
My 2012 MacBook Pro was the last laptop model Apple made with an NVIDIA GPU. The 2013 iMac was, IIRC, the last desktop. The Mac Pro did have official support after that point, but those were very expensive (and not laptops).
Joss Knight
on 18 Dec 2021
Edited: Joss Knight
on 18 Dec 2021
Apple is not, dare I say it, the only manufacturer of laptops.
Walter Roberson
on 19 Dec 2021
There were very few non-Apple laptops that you could put MacOS on ;-)
Joss Knight
on 19 Dec 2021
Edited: Joss Knight
on 19 Dec 2021
I think it's a fair assumption that Patrick, as a skilled technical programmer, is flexible enough to learn more than one operating system, Walter... :)
Walter Roberson
on 20 Dec 2021
I spent 6 hours last night doing simple system administration on my mother's Windows system. That was one of the easier nights; 14 or 20 hours is more common. Her system does not have unusual problems... or rather, the problems she encounters are not uncommon for Windows systems.
I have Windows installed in virtual machines on my Mac. My ratio of time spent administering the Windows systems to the time spent using Windows runs at least 5:1.
Joss Knight
on 20 Dec 2021
I sympathise Walter. Last time I was on a Mac I spent hours just working out how to open a file on the network.
But I think even the most intransigent developer would accept that it is really a matter of familiarity. I started out on Macs as a teenager and found the Unix systems I started out on at work baffling, until I realised just how powerfully configurable and inspectable they were. Forced to move to Windows, I resented the control being taken away until I realised that Windows has the best development tools of all the OSs. Forced then to move to Linux and its clunkier IDEs, I eventually realised that the Windows filesystem, a messy mishmash of decades-old and newer technologies, made cross-platform development almost impossible. But ultimately I know I could master anything given time, and I'm sure Patrick is the same. Like a move from iPhone to Android, as the advantages of one world are conceded, the wonders of another are exposed, and it really doesn't take that long.
Essentially he wants three things: 1. to use MATLAB, 2. to use the GPU, and 3. to use macOS. He must give up one of those things, and I'm hardly going to advocate he give up number 1 (and I'm sure, given that would mean him rewriting absolutely everything, he wouldn't either).
Walter Roberson
on 20 Dec 2021
He must give up one of those things
Not if we can convince Mathworks to implement post-NVIDIA on Apple systems ;-)
But ultimately I know I could master anything given time
In university, as a Math / Computing major, in the second half of my second year, I took the combined Quantum Physics and Quantum Chemistry third-year course from the Engineering department -- a course I was told that Engineering students preferred to postpone to 4th year if they could.
That was the course where I discovered that some fields of knowledge are beyond my talents, that "given time" is not always enough.
But in any case, who is indeed "given time" these days? The more toolboxes I poke my head into to try to help someone, the more people want me to help them with even more fields of knowledge. I have a new iMac sitting on my desk that I haven't had time to switch over to in the 7 months it has been sitting there because I've been too busy answering questions...
Joss Knight
on 20 Dec 2021
It isn't just a matter of convincing MathWorks. You also need to convince Apple to provide support for a huge array of high performance computing software fundamentals. And then you need to convince your fellow Mac users to use their Macs as their primary system for performing long-running HPC calculations, like our friend Patrick here.
It'll be interesting to see how this evolves over the next few years - at the moment Apple doesn't seem particularly interested in this area (if you saw how Macs are supported in our Build-and-Test data center you'd understand), but that could change. The biggest challenge we seem to have with Apple is that they like proprietary hardware and they like the freedom to switch it out every few years. This makes it really hard to keep up. We'll keep doing our best though.
By the way, I think you were just being facetious but of course I'm not saying I can learn anything given time, just any operating system...I'm never going to truly understand advanced statistics, theology, or the value of Agile development practices no matter how much time you give me.
Joss Knight
on 20 Dec 2021
By the way the "getting more done just creates more to do" is part of a fundamental philosophy expounded by Oliver Burkeman in his book "Four Thousand Weeks". What he's saying is, maybe the urge to reach the bottom of the mailbag is futile and unfulfilling.
Patrick Hullett
on 21 Dec 2021
Edited: Patrick Hullett
on 21 Dec 2021
Hi Joss, thank you very much for replying to my comment. I mainly wanted to give my situation as an example to help raise awareness that there are people who would really benefit from this. I agree, at one point in time it may not have seemed like a smart investment to make, and it may still not be, but I would say my situation is not uncommon in academia (at least the places I have been), and every year more of the research community in the biological sciences is moving toward more computationally oriented work. This is just a long way of saying it may be worth the investment. Happy holidays.
Walter Roberson
on 21 Dec 2021
To really use an RTX 3090 (earlier said to be US$1,500) you also need at least a 750-watt power supply, a lot of fast memory, a decently fast CPU, a motherboard, a case, and cooling systems... that's a fair bit of investment.
Sometimes the goal is not to maximize GPU performance (with all attendant costs), but rather to do what you can with what you can afford, within an ecosystem of tools relevant to what you do.
Joss Knight
on 21 Dec 2021
Well, it's been an interesting discussion and I'll certainly add this user story to the other cases for widening support for accelerators including Apple M1, AMD, Intel, TPU and others.
Peter Felfer
on 27 Jan 2022
I used to use GPGPU computation a lot when it was available on my Mac, not because of long-running simulations, but because it made my calculations (3D data reconstruction), which require user interaction, interactive. This is still not possible using the CPU. Maybe that is another very important aspect of having Mac GPGPU support.