Google Colab vs. RTX3060Ti - Is a Dedicated GPU Better for Deep Learning? | Better Data Science (2024)

NVIDIA RTX3060Ti dedicated GPU vs. a completely free environment - Which is better for TensorFlow?

Is it worth buying a dedicated GPU for deep learning? I mean, you can do most of the lightweight tasks for free in Google Colab, so is dedicated hardware worth it when you’re starting out? That’s what we’ll answer today.

Today we’ll run two data science benchmarks using TensorFlow and compare a custom PC with Google Colab. We’ll ignore the obvious benefits of having a PC and focus only on the model training speed. The custom PC set me back around $1300, which isn’t too bad for the components packed inside.

Here’s a table summarizing hardware differences between the two:

(Image 1: hardware specification comparison between the custom PC and Google Colab)

The custom PC has more RAM and a more recent CPU. Comparing the GPUs is trickier. The RTX 3060Ti is newer but packs less memory. The Tesla K80 has 4,992 CUDA cores, while the 3060Ti has 4,864 - pretty comparable numbers. Keep in mind that the Colab environment assigned to me was completely random. You're likely to get a different one, so the benchmark results may vary.


Google Colab vs. RTX3060Ti - Data Science Benchmark Setup

As for the dataset, I’ve used the Dogs vs. Cats dataset from Kaggle, which is licensed under the Creative Commons License. Long story short, you can use it for free.

Refer to the following article for detailed instructions on how to organize and preprocess it:

TensorFlow for Image Classification - Top 3 Prerequisites for Deep Learning Projects
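If you'd rather skip the full article, here's a minimal sketch of how to get the Kaggle dump into the data/train/<class> and data/validation/<class> layout that flow_from_directory expects. The split_filenames helper and the 80/20 split ratio are my own assumptions, not part of the original preprocessing; the Kaggle archive names files like cat.0.jpg and dog.0.jpg:

```python
import os
import random
import shutil


def split_filenames(filenames, valid_ratio=0.2, seed=42):
    """Deterministically shuffle filenames, then split them
    into (train, validation) lists."""
    shuffled = sorted(filenames)
    random.Random(seed).shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_ratio)
    return shuffled[n_valid:], shuffled[:n_valid]


def organize(src_dir, dst_dir='data', classes=('cat', 'dog')):
    """Copy images into data/train/<class> and data/validation/<class>,
    the directory layout flow_from_directory expects."""
    for cls in classes:
        files = [f for f in os.listdir(src_dir) if f.startswith(cls)]
        train, valid = split_filenames(files)
        for subset, names in (('train', train), ('validation', valid)):
            out_dir = os.path.join(dst_dir, subset, cls)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src_dir, name), out_dir)
```

Running organize('train') on the extracted Kaggle folder produces the structure the generators below read from.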

We’ll do two tests today:

  1. TensorFlow with a custom model architecture - Uses two convolutional blocks described in my CNN article.
  2. TensorFlow with transfer learning - Uses VGG-16 pretrained network to classify images.

Let’s go over the code used in the tests.

Custom TensorFlow Model - The Code

I’ve split this test into two parts - a model with and without data augmentation. Use only a single pair of train_datagen and valid_datagen at a time:

import os
import warnings
from datetime import datetime

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')

import numpy as np
import tensorflow as tf
tf.random.set_seed(42)

# COLAB ONLY
from google.colab import drive
drive.mount('/content/drive')

####################
# 1. Data loading
####################

# USED ON A TEST WITHOUT DATA AUGMENTATION
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)

# USED ON A TEST WITH DATA AUGMENTATION
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)

train_data = train_datagen.flow_from_directory(
    directory='data/train/',
    target_size=(224, 224),
    class_mode='categorical',
    batch_size=64,
    seed=42
)
valid_data = valid_datagen.flow_from_directory(
    directory='data/validation/',
    target_size=(224, 224),
    class_mode='categorical',
    batch_size=64,
    seed=42
)

####################
# 2. Model
####################
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), input_shape=(224, 224, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    # CategoricalAccuracy matches the one-hot labels from class_mode='categorical'
    metrics=[tf.keras.metrics.CategoricalAccuracy(name='accuracy')]
)

####################
# 3. Training
####################
time_start = datetime.now()
model.fit(
    train_data,
    validation_data=valid_data,
    epochs=5
)
time_end = datetime.now()
print(f'Duration: {time_end - time_start}')
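The script measures total training time, but the results are compared as average time per epoch. If you want per-epoch numbers directly, a custom Keras callback does the trick - this is a sketch of my own, not part of the original benchmark:

```python
import time

import tensorflow as tf


class EpochTimer(tf.keras.callbacks.Callback):
    """Records the wall-clock duration of each training epoch."""

    def on_train_begin(self, logs=None):
        self.durations = []

    def on_epoch_begin(self, epoch, logs=None):
        self._epoch_start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        self.durations.append(time.time() - self._epoch_start)
```

Pass it to training with model.fit(..., callbacks=[timer]) and compute sum(timer.durations) / len(timer.durations) for the average epoch time.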

Let’s go over the transfer learning code next.

Transfer Learning TensorFlow Model - The Code

Much of the imports and data loading code is the same. Once again, use only a single pair of train_datagen and valid_datagen at a time:

import os
import warnings
from datetime import datetime

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')

import numpy as np
import tensorflow as tf
tf.random.set_seed(42)

# COLAB ONLY
from google.colab import drive
drive.mount('/content/drive')

####################
# 1. Data loading
####################

# USED ON A TEST WITHOUT DATA AUGMENTATION
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)

# USED ON A TEST WITH DATA AUGMENTATION
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1/255.0
)

train_data = train_datagen.flow_from_directory(
    directory='data/train/',
    target_size=(224, 224),
    class_mode='categorical',
    batch_size=64,
    seed=42
)
valid_data = valid_datagen.flow_from_directory(
    directory='data/validation/',
    target_size=(224, 224),
    class_mode='categorical',
    batch_size=64,
    seed=42
)

####################
# 2. Base model
####################
vgg_base_model = tf.keras.applications.vgg16.VGG16(
    include_top=False,
    input_shape=(224, 224, 3),
    weights='imagenet'
)
# Freeze the pretrained layers - only the custom head will be trained
for layer in vgg_base_model.layers:
    layer.trainable = False

####################
# 3. Custom layers
####################
x = tf.keras.layers.Flatten()(vgg_base_model.layers[-1].output)
x = tf.keras.layers.Dense(128, activation='relu')(x)
out = tf.keras.layers.Dense(2, activation='softmax')(x)

vgg_model = tf.keras.models.Model(
    inputs=vgg_base_model.inputs,
    outputs=out
)
vgg_model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    # CategoricalAccuracy matches the one-hot labels from class_mode='categorical'
    metrics=[tf.keras.metrics.CategoricalAccuracy(name='accuracy')]
)

####################
# 4. Training
####################
time_start = datetime.now()
vgg_model.fit(
    train_data,
    validation_data=valid_data,
    epochs=5
)
time_end = datetime.now()
print(f'Duration: {time_end - time_start}')
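Before timing anything, it's worth verifying that TensorFlow actually sees a GPU - on Colab that means switching the runtime type to GPU first. A quick sanity check:

```python
import tensorflow as tf

# GPUs TensorFlow can see; an empty list means training falls back to the CPU
gpus = tf.config.list_physical_devices('GPU')
print(f'GPUs available: {len(gpus)}')

# Whether this TensorFlow build was compiled with CUDA support
print(f'Built with CUDA: {tf.test.is_built_with_cuda()}')
```

If the list is empty on a machine with a dedicated GPU, the CUDA/cuDNN setup is usually the culprit.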

Finally, let’s see the results of the benchmarks.

Google Colab vs. RTX3060Ti - Data Science Benchmark Results

We’ll now compare the average training time per epoch for both a custom PC with RTX3060Ti and Google Colab on a custom model architecture. Keep in mind that two models were trained, one with and one without data augmentation:

(Image 2: average training time per epoch - custom model, with and without data augmentation)

Not even close. The RTX 3060Ti is almost 4 times faster on the non-augmented image dataset and around 2 times faster on the augmented one. The performance difference is expected - you're paying for the hardware, after all.

But who writes CNN models from scratch these days? Transfer learning is always recommended if you have limited data and your images aren’t highly specialized. Here are the results for the transfer learning models:

(Image 3: average training time per epoch - transfer learning, with and without data augmentation)

We're looking at similar performance differences as before. The RTX 3060Ti is about 4 times faster than the Tesla K80 running on Google Colab on the non-augmented set, and around 2.4 times faster on the augmented one.

You now know the numbers, but are these alone enough to make an informed purchase decision? It depends, and I’ll elaborate on why next.

Conclusion

If you're working in tech, spending $1300 on a PC probably won't be a huge hit to your wallet, but that alone doesn't make it a sound purchase. A single PC isn't scalable. What happens when the RTX 3060Ti isn't enough? You'd have to buy an additional GPU, which is expensive and hard to come by these days.

The cloud provides a more scalable solution. You rent a GPU only when you need it, and scaling up or down is ridiculously easy.

That being said, a PC comes with benefits of its own. You can use it for other tasks, such as office work, gaming, and anything else. A cloud GPU is only good for computing. That's something to factor in.

To summarize, even a mid-range dedicated GPU dramatically outperforms the free Google Colab environment. Keep in mind that I was assigned a 12 GB Tesla K80, which might not be what you get - your benchmark results may vary.

What are your thoughts on a cloud vs. on-premise solution for deep learning? What was the tipping point in your career when the cloud became more viable? Let me know in the comment section below.

Learn More

  • Benchmark: MacBook M1 13" vs. M1 Pro 16"
  • Benchmark: MacBook M1 Pro 16" vs. Google Colab
  • Benchmark: MacBook M1 Pro 16" vs RTX3060Ti
