I have some code that takes a list of a couple thousand URLs and fetches the relevant info from each one. The synchronous version ran for about 5 minutes; after I rewrote it to handle each URL with goroutines and channels, the whole thing finishes in about 20 seconds. The benefit is obvious to me.
But I got pushback from another person. In his opinion, launching that many goroutines is a bad decision: the number of threads depends on the number of CPU cores, each goroutine allocates its own stack, and so all the goroutines beyond the core count just hang around in memory, wasting it.
But I find the claims about the core count and the large memory overhead hard to believe.
Can anyone comment on this?
My code, by the way: https://github.com/d0kur0/webm-grabber