How to Switch to Local MPS on Mac for PyTorch

You’ve probably heard about Metal Performance Shaders (MPS), especially if you’re working with PyTorch on a Mac with Apple Silicon (M1/M2). If you’ve ever wondered, “Can I use my Mac’s GPU for PyTorch instead of defaulting to CPU?” — the answer is yes, thanks to MPS. In this post, I’ll show you exactly how to switch PyTorch to use local MPS on your Mac, step-by-step. Whether you’re training a neural network or just running simple tensor operations, using the GPU can significantly speed up performance. Let’s get you running PyTorch on MPS like a pro.

What Is MPS and Why Use It on Mac?

Introduction to Metal Performance Shaders (MPS)

MPS is Apple’s framework for GPU-accelerated machine learning and deep learning on macOS. Think of it as Apple’s alternative to CUDA (which isn’t available on Macs).

PyTorch officially added MPS backend support starting with version 1.12, enabling users with M1 and M2 chips to tap into their native GPU hardware for training and inference.

Benefits of Using MPS vs CPU

  • Much faster training and inference compared to CPU
  • Optimized for Mac architecture (especially on M1/M2)
  • Great for students, researchers, and developers using MacBooks for ML
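To see the speedup for yourself, here's a minimal timing sketch: one large matrix multiply on CPU and then on the MPS device (the matrix size is arbitrary). It falls back to CPU when MPS isn't available, and calls `torch.mps.synchronize()` (available in recent PyTorch builds) because MPS kernels run asynchronously:

```python
import time
import torch

# Pick MPS when available; otherwise fall back to CPU so the script
# still runs on non-Apple machines.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

# CPU baseline
start = time.perf_counter()
_ = a @ b
cpu_seconds = time.perf_counter() - start

# Same multiply on the chosen device
a_dev, b_dev = a.to(device), b.to(device)
start = time.perf_counter()
c = a_dev @ b_dev
if device.type == "mps":
    torch.mps.synchronize()  # wait for the GPU before stopping the clock
dev_seconds = time.perf_counter() - start

print(f"cpu: {cpu_seconds:.4f}s  {device.type}: {dev_seconds:.4f}s")
```

On an M1/M2 the MPS time is usually a fraction of the CPU time for workloads like this, though the exact ratio depends on your chip and the operation.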

System Requirements

Before you try to switch to MPS, let’s make sure your setup supports it.

Supported macOS and Mac Hardware

| Requirement | Minimum |
| --- | --- |
| macOS version | macOS 12.3 (Monterey) or newer |
| Chipset | Apple Silicon (M1, M1 Pro, M2, M2 Pro, etc.) |
| Intel Macs | Not supported by the MPS backend |

Python and PyTorch Compatibility

| Component | Version required |
| --- | --- |
| Python | 3.8 to 3.11 |
| PyTorch | 1.12+ (latest recommended) |
| Install method | conda or pip (no Docker support for MPS yet) |

How to Check If Your Mac Supports MPS

1. Use Terminal to Check Chip Info

Open Terminal and run:

```bash
uname -m
```

If the result is arm64, you’re on Apple Silicon. If it says x86_64, you’re on Intel (sorry—no MPS support).

You can also check your processor in About This Mac > Overview.
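If you'd rather check from Python, the standard-library `platform` module reports the same architecture string as `uname -m`:

```python
import platform

# "arm64" means Apple Silicon; "x86_64" means an Intel CPU.
machine = platform.machine()
if machine == "arm64":
    print("Apple Silicon detected: MPS is possible")
elif machine == "x86_64":
    print("Intel Mac: the MPS backend is not supported")
else:
    print(f"Not a Mac (or an unusual architecture): {machine}")
```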

2. Confirm GPU and OS

Run:

```bash
system_profiler SPHardwareDataType | grep "Chip"
sw_vers
```

Make sure you’re on macOS 12.3 or later and have an M1/M2 series chip.


Installing PyTorch with MPS Support

Here’s the good part—getting PyTorch installed correctly so you can actually use MPS.

1. Using pip (Recommended for most)

```bash
pip install torch torchvision torchaudio
```

To install a specific version (e.g., 2.1.0):

```bash
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0
```

MPS is now bundled into the official PyTorch build for Mac.

2. Using Conda (for environment isolation)

```bash
conda install pytorch torchvision torchaudio -c pytorch
```

Check torch.__version__ after installation to confirm you have the right version (1.12 or later).

Troubleshooting Installation Issues

Problem: MPS not available even after install?
Fix:

  • Make sure you’re not using an outdated Intel-based Mac
  • Try running with Python 3.10 or 3.11
  • Ensure you’re not inside a Docker container (MPS doesn’t work there yet)

Verifying MPS Availability in PyTorch

After installing, let’s make sure everything works as expected.

Run a Simple Python Check

Open a Python shell and run:

```python
import torch

print(torch.backends.mps.is_available())  # True means MPS is ready to use
print(torch.backends.mps.is_built())      # should also return True
```

If both return True, congratulations! 🎉 You can now use MPS in your PyTorch models.

What If MPS Is Not Available?

✅ Checklist:

  • Are you on macOS 12.3+?
  • Do you have an M1 or M2 chip?
  • Are you using PyTorch 1.12+?
  • Are you in a non-Docker Python environment?

If any of these conditions aren’t met, MPS will likely fail to load.
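The checklist above can be turned into a small diagnostic script that prints a pass/fail line per condition (the labels are my own, just for readability):

```python
import platform
import torch

# Each entry maps a checklist item to a boolean result.
checks = {
    "Apple Silicon (arm64)": platform.machine() == "arm64",
    "PyTorch 1.12+": tuple(int(p) for p in torch.__version__.split(".")[:2]) >= (1, 12),
    "MPS built into this PyTorch": torch.backends.mps.is_built(),
    "MPS available at runtime": torch.backends.mps.is_available(),
}

for name, ok in checks.items():
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```

(The Docker condition can't be detected reliably from inside Python, so it isn't included here.)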

Switching Your PyTorch Model to MPS

Okay, let’s actually use the MPS device.

.to("mps") – The Key Command

PyTorch lets you move a model or tensor to another device with .to(device).

```python
device = torch.device("mps")
model = model.to(device)
```

The same goes for input tensors:

```python
x = x.to(device)
y = y.to(device)
```

Full Example: Moving Model and Tensors to MPS

```python
import torch
import torch.nn as nn

# Check MPS availability
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Dummy model
model = nn.Linear(10, 2).to(device)

# Dummy input
x = torch.randn(1, 10).to(device)
output = model(x)

print(output)
```

Important: Always move both the model and data to the same device.
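Extending the example above, here's a sketch of a full training loop on MPS. The data is synthetic and the hyperparameters are arbitrary; the point is that the model and each batch go to the same device before the forward pass:

```python
import torch
import torch.nn as nn

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Toy regression data (shapes chosen only for illustration)
X = torch.randn(64, 10)
y = torch.randn(64, 2)

model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(5):
    # Move the batch to the same device as the model
    xb, yb = X.to(device), y.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()   # backprop works on MPS
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```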


Limitations of MPS Backend

As of 2024, MPS is still not as mature as CUDA, so some limitations exist.

Known Limitations

  • ❌ No multi-GPU support
  • ⚠️ Slower compared to CUDA for large models
  • ❌ Some ops (especially in older versions) are not implemented yet
  • ⚠️ FP16/AMP (Automatic Mixed Precision) support is limited compared to CUDA

PyTorch continues to improve MPS support, but don’t expect lightning-fast speeds for heavy-duty models like GPT or ResNet-152.
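For the missing-operator case, PyTorch documents an environment variable, PYTORCH_ENABLE_MPS_FALLBACK, that routes unimplemented ops to the CPU instead of raising an error. It has to be set before Python starts:

```shell
# Route any operator the MPS backend hasn't implemented yet to the CPU.
# Set this in your shell (or launch script) before starting Python.
export PYTORCH_ENABLE_MPS_FALLBACK=1
```

Then run your script as usual (e.g. `python train.py`, where train.py stands in for your own script). Expect fallback ops to be slower, since data moves between GPU and CPU.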

Best Practices When Using MPS on Mac

1. Use Medium Batch Sizes

MPS performs better with moderate batches (e.g., 16–64). Extremely large batches may cause memory crashes.

2. Monitor Memory Usage

Use Activity Monitor to check GPU memory usage. PyTorch doesn’t yet have native tools to track MPS memory like nvidia-smi.

3. Stick With Default Precision

Until AMP is fully supported on MPS, it’s safest to stick with float32 precision for all models.

4. Use Mini Projects to Benchmark

Try:

  • Training MNIST or CIFAR-10
  • Running simple LSTM or Transformer modules
  • Transfer learning with MobileNet or ResNet-18

This helps you understand how your Mac’s GPU handles real workloads.

Common Errors and How to Fix Them

1. RuntimeError: MPS backend not available

Cause: Older macOS, wrong PyTorch version, or Intel Mac

Fix:

  • Upgrade macOS to 12.3 or newer
  • Upgrade to PyTorch 1.12+
  • Use Apple Silicon

2. Expected all tensors to be on the same device

Cause: You mixed CPU and MPS tensors

Fix:

```python
x = x.to(device)
model = model.to(device)
```

Always make sure your model, data, and labels are all on the same device.
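One way to avoid this class of error entirely is a small helper (my own utility, not part of PyTorch) that moves a whole batch, including nested lists and dicts, to one device:

```python
import torch
import torch.nn as nn

def to_device(obj, device):
    """Recursively move tensors (or lists/tuples/dicts of them) to one device."""
    if isinstance(obj, torch.Tensor):
        return obj.to(device)
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_device(o, device) for o in obj)
    if isinstance(obj, dict):
        return {k: to_device(v, device) for k, v in obj.items()}
    return obj  # leave non-tensor values (ints, strings, ...) untouched

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = nn.Linear(10, 2).to(device)

batch = {"inputs": torch.randn(4, 10), "labels": torch.randn(4, 2)}
batch = to_device(batch, device)       # everything now matches the model
out = model(batch["inputs"])           # no device-mismatch error
```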

When to Use MPS vs CPU vs GPU

| Use case | Recommended backend |
| --- | --- |
| Small ML projects on Mac | ✅ MPS |
| Large-scale deep learning | ❌ Use CUDA (Linux) |
| Mobile or embedded testing | ✅ MPS or Core ML |
| Maximum performance required | ❌ MPS not ideal |

Final Thoughts

Switching to MPS on Mac for PyTorch is a game-changer for local development. You don’t need an NVIDIA GPU or a cloud instance just to train models—your Mac’s GPU can now do the heavy lifting.

While MPS isn’t perfect yet (especially for large-scale training), it’s rapidly improving, and for many developers, it’s more than enough for:

  • Model prototyping
  • Academic work
  • Running inference
  • Fast experimentation

So go ahead, fire up your Mac, and get building with MPS and PyTorch!

FAQs

1. Is MPS available for Intel Macs?

No, MPS is only supported on Apple Silicon (M1/M2) and macOS 12.3 or later.

2. Does PyTorch MPS support backpropagation?

Yes, but some complex operations may still be limited depending on your PyTorch version.

3. Can I train full models on MPS?

Yes, but large models may perform slowly or run into memory limits.

4. Is MPS slower than CUDA?

Yes—for now. CUDA is still the most optimized backend for deep learning. MPS is catching up, especially for Apple hardware.

5. Will Apple improve MPS support in the future?

Absolutely. Both Apple and the PyTorch team are actively working on better MPS integration.

Author: Abhinesh Rai
