Question
Asked 6th Feb, 2022

Is a 10th Gen Core i7 with 16 GB RAM, a 1 TB SSD, and a 6 GB NVIDIA GeForce RTX 3060 Ti (144 Hz display) enough for deep learning and computer vision tasks?

My budget is tight and my work involves computer vision tasks.

All Answers (10)

Mohaiminul Islam
Østfold University College
Yes, I think you can do most of your projects with it. You can also use an online platform to write and run code when needed.
Prithwish Jana
Georgia Institute of Technology
It depends; in fact, it depends on many things. The processor (e.g. a 10th Gen Core i7 or another variant) matters for CPU speed and for the size and speed of its built-in cache. The RAM size (16 GB here) provides space for your program's heap and stack. Heap size matters for dynamic memory allocation, i.e. allocating memory on demand; stack size matters for running the program itself: saving context during context switches and storing local variables when nesting into methods or functions. DL models and CV tasks dealing with images and videos also need a lot of physical storage, i.e. your HDD/SSD or virtual memory. All of these components have to work together, and well-implemented, optimized algorithms can make full use of them.
You may refer to our article to get an idea of how a single algorithm implementation behaves with different types and capacities of resources.
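As a rough, hedged sketch of checking those resources on your own machine (assuming psutil and PyTorch are installed; adjust the disk path to wherever your datasets live):

```python
# Quick inventory of the resources discussed above: CPU cores, RAM, disk
# space for datasets, and GPU memory. Illustrative sketch, not a benchmark.
import os
import shutil

import psutil
import torch

print(f"CPU cores:       {os.cpu_count()}")

ram = psutil.virtual_memory()
print(f"RAM total/free:  {ram.total / 1e9:.1f} GB / {ram.available / 1e9:.1f} GB")

disk = shutil.disk_usage("/")  # path to the drive holding your datasets
print(f"Disk total/free: {disk.total / 1e9:.1f} GB / {disk.free / 1e9:.1f} GB")

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:             {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")
else:
    print("GPU:             no CUDA device visible")
```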
1 Recommendation
It depends on how many datasets and iterations are used, but it is better if you increase the RAM.
Ravindar Mogili
Jyothishmathi Institute of Technology and Science
This configuration is enough for most Deep Learning and Computer Vision tasks. If you are unable to run a particular deep learning or computer vision task smoothly on a lower-end system, you can use an online platform such as Google Colab.
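If you go the Colab route, a minimal sketch for confirming that the GPU runtime is actually active (assuming the default Colab image, which ships with PyTorch preinstalled) might look like this:

```python
# Verify the Colab GPU runtime (Runtime -> Change runtime type -> GPU).
import subprocess

import torch

# Ask the NVIDIA driver which device the runtime exposes.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

# Confirm the framework can actually see and use it.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```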
José Alberto Guzmán Torres
Universidad Michoacana de San Nicolás de Hidalgo
Dear Dipesh,
The configuration you mentioned is enough for almost all deep learning and computer vision projects. The most important component here is the GPU, because the GPU will do the hard work. The RTX 3060 Ti is a modern GPU with a compute capability of 8.6, and that is enough for deep learning. I actually have a GTX 1050 Ti, and I work satisfactorily with that GPU in all my deep learning models and computer vision projects.
Best regards
José Alberto
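As a small, optional check (assuming PyTorch with CUDA support is installed), you can confirm the compute capability mentioned above; an Ampere card such as the RTX 3060/3060 Ti should report 8.6:

```python
# Query the compute capability of the first visible CUDA device.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
else:
    print("No CUDA-capable GPU detected")
```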
Aha... 😋 Yes, it's not just enough, it's the best. 😂 What is the GPU memory size? A dual-GPU setup with more than 8 GB of RAM each would be a match made in heaven. Just make sure to check your CUDA version against your TensorFlow and PyTorch builds, along with the appropriate Python version (3.8 in this case). Also, you may have to update your old code if you want to get the maximum utility out of your GPUs.
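A quick, hedged sketch for checking that the Python, CUDA, TensorFlow, and PyTorch versions on your machine line up, as suggested above (assuming both frameworks are installed; drop whichever one you do not use):

```python
# Print the interpreter and framework versions plus the CUDA build each
# framework was compiled against, so mismatches are easy to spot.
import sys

import tensorflow as tf
import torch

print("Python:          ", sys.version.split()[0])

print("PyTorch:         ", torch.__version__)
print("  built for CUDA:", torch.version.cuda)
print("  GPU visible:   ", torch.cuda.is_available())

print("TensorFlow:      ", tf.__version__)
print("  GPUs visible:  ", tf.config.list_physical_devices("GPU"))
```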
Ali Khalili
University of Tehran
It depends. First of all, and in my opinion of course, the only thing that really matters is the GPU. So no matter how fast and powerful your CPU is, it is not going to affect your work much. You only need a base system that won't bottleneck your GPU performance, and your current setup is well above that. If you are building a desktop PC, one thing you can do is choose a Ryzen CPU and motherboard. You could also choose a weaker CPU than the one you have picked right now because, as I said before, you don't need that much power, and the weaker the CPU, the cheaper it is. You can do the same for every other part of your setup except the GPU, so I suggest you study this a little.
Now for the GPU: you have chosen an RTX 3060 Ti, which, if I am not mistaken, has 8 GB of GDDR6 VRAM. In deep learning, and especially computer vision, our data is big most of the time, and as model complexity increases, the VRAM required to store the information being processed increases accordingly. 8 GB is the bare minimum for a decent computer vision setup and will solve almost all the problems you run into, sooner or later, but if you can raise your budget a little, cut unnecessary expenses as I said earlier, and acquire a GPU setup with at least 12 GB of VRAM (e.g. 1x 8 GB + 1x 4 GB), that could boost your work a lot.
Finally, VRAM is not the only thing you should look for in a GPU. You should also pay attention to the number of CUDA cores the GPU has and to the workload it was designed for. I would therefore recommend a Quadro RTX series GPU; they have more CUDA cores and, to my knowledge, are designed for computational tasks, which is exactly what you need.
Hope that helps.
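For reference, here is a rough sketch (assuming a reasonably recent PyTorch build) of inspecting the two GPU properties highlighted above, available VRAM and raw parallel capacity. Note that PyTorch exposes the streaming-multiprocessor (SM) count rather than CUDA cores directly; cores per SM depend on the architecture (128 per SM for Ampere consumer cards).

```python
# Report free/total VRAM and the SM count of the first CUDA device.
import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(0)  # needs PyTorch >= 1.10
    props = torch.cuda.get_device_properties(0)
    print(props.name)
    print(f"  VRAM free/total: {free_bytes / 1e9:.1f} / {total_bytes / 1e9:.1f} GB")
    print(f"  SM count:        {props.multi_processor_count}")
else:
    print("No CUDA-capable GPU detected")
```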
M. Israk Ahmed
Memorial University of Newfoundland
It depends on the dataset type and size, as well as the model complexity and number of iterations. However, your configuration is good enough.
But for those who just want to start, a basic computer with a good internet connection is also enough; there is a well-known platform called Google Colab where you can get GPU and TPU power for free.
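As a back-of-the-envelope illustration of how dataset and batch settings translate into memory, following the point above that requirements depend on data size and model complexity (the numbers below are illustrative assumptions, not a rule):

```python
# Estimate the memory footprint of a single input batch of images.
batch_size = 32
channels, height, width = 3, 224, 224   # typical ImageNet-style input
bytes_per_value = 4                     # float32

batch_bytes = batch_size * channels * height * width * bytes_per_value
print(f"One input batch: {batch_bytes / 1e6:.1f} MB")
# Activations, gradients, and optimizer state come on top of this, so actual
# VRAM use during training is usually several times larger.
```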
Nitesh Jindal
Indian School of Business
I think you could upgrade the RAM to 32 GB and choose a card with 8 GB of video memory instead of the 6 GB NVIDIA GeForce RTX 3060. 6 GB is generally considered the minimum VRAM for CUDA workloads, which is why 8 GB or more is better. You could also opt for an RTX 3070 with an 8 GB VRAM configuration if you would like to.
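If upgrading is not an option and you have to make 6 GB of VRAM work for now, one common workaround is to combine a smaller batch size with automatic mixed precision, which roughly halves activation memory. A minimal PyTorch sketch (the model, shapes, and hyperparameters are illustrative assumptions, not a recommendation):

```python
# One mixed-precision training step on a dummy batch, to show the pattern.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

# Dummy batch standing in for a real dataloader; keep the batch small.
images = torch.randn(16, 3, 224, 224, device=device)
labels = torch.randint(0, 10, (16,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=(device == "cuda")):
    loss = loss_fn(model(images), labels)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print("loss:", loss.item())
```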

