!aBImGsZqzKUqCwIRRy:matrix.org

Data Science & Machine Learning

911 Members
For data science related discussion · 15 Servers



20 Mar 2023
@_discord_100584466472247296:t2bot.ioh4pZ Yeah, more or less. However, that’s if you wanna do like the whole diffusion, huge LLM, VLMs stuff. If you just want to do the “””normal””” stuff like object classification/image classification, you should be fine 17:58:12
@_discord_100584466472247296:t2bot.ioh4pZ Just anecdotal evidence of me trying to run a huge diffusion model with my 4080 with FP32 hehe 17:58:34
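(Editor's note: a back-of-the-envelope sketch of why FP32 hurts here. This is an illustrative calculation, not a measurement; the 3-billion-parameter figure is a made-up example, and the estimate covers weights only, ignoring activations, optimizer state, and framework overhead.)

```python
# Rough, illustrative VRAM estimate for model weights alone.
# Assumption: num_params and the per-parameter byte width are all we count;
# real usage is higher (activations, optimizer state, CUDA overhead).
def weight_vram_gib(num_params: float, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / 1024**3

# A hypothetical 3-billion-parameter diffusion model:
fp32 = weight_vram_gib(3e9, 4)  # FP32: 4 bytes per parameter
fp16 = weight_vram_gib(3e9, 2)  # FP16: 2 bytes per parameter
print(f"FP32: {fp32:.1f} GiB, FP16: {fp16:.1f} GiB")
# FP32 alone already crowds a 16 GB card like the 4080;
# halving the precision halves the weight footprint.
```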
@_discord_760839614595858433:t2bot.ioAPCodes#2552 yeah I guess I'll go for the 3060 and try to get one for 250 Euros. I can potentially resell or reuse it and get a used 3090 in a year or so if I feel the need for that. 17:59:16
@_discord_100584466472247296:t2bot.ioh4pZ That would be a good idea. 24GB of VRAM can make the difference 18:00:12
@_discord_760839614595858433:t2bot.ioAPCodes#2552 is it possible to easily use two 3060 together? That would also give me 24 GB of VRAM 18:00:53
@_discord_760839614595858433:t2bot.ioAPCodes#2552 and would be cheaper I suppose 18:01:25
@_discord_100584466472247296:t2bot.ioh4pZ Haven’t really tried it tbh 18:01:59
@_discord_100584466472247296:t2bot.ioh4pZ I know there are a lot of tools that let you distribute training on multiple gpus 18:03:18
@_discord_100584466472247296:t2bot.ioh4pZ Via data parallelism 18:03:23
@_discord_760839614595858433:t2bot.ioAPCodes#2552 okay interesting, I thought as much. In that case I assume the stuff might also run on two different machines, yeah? 18:03:23
@_discord_760839614595858433:t2bot.ioAPCodes#2552 like it would be the case in many cloud applications anyways 18:03:26
@_discord_760839614595858433:t2bot.ioAPCodes#2552 yeah: "DistributedDataParallel (DDP) implements data parallelism at the module level which can run across multiple machines." 18:06:07
@_discord_760839614595858433:t2bot.ioAPCodes#2552 I just answered that myself I guess 😄 18:06:14
@_discord_100584466472247296:t2bot.ioh4pZ Yes 18:09:19
@_discord_100584466472247296:t2bot.ioh4pZ Yes you did :) 18:09:33
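(Editor's note: a toy sketch of the idea behind the data parallelism discussed above. This is not PyTorch's DDP implementation; real DDP wraps a model in `torch.nn.parallel.DistributedDataParallel` and averages gradients with an all-reduce over NCCL/Gloo across GPUs or machines. All names below are illustrative, using a 1-D least-squares model so the arithmetic is checkable.)

```python
# Toy data parallelism: each "worker" computes gradients on its own shard
# of the batch, the gradients are averaged (the all-reduce step), and every
# replica applies the same update.
def local_gradient(w, shard):
    # Gradient of mean((w*x - y)^2) w.r.t. w: mean(2*x*(w*x - y))
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers, lr=0.01):
    # 1. Split the batch into one shard per worker.
    shards = [batch[i::num_workers] for i in range(num_workers)]
    # 2. Each worker computes its local gradient (in real DDP this runs
    #    on a separate GPU or machine).
    grads = [local_gradient(w, shard) for shard in shards]
    # 3. All-reduce: average the gradients across workers.
    avg_grad = sum(grads) / num_workers
    # 4. Every replica applies the identical update, so weights stay in sync.
    return w - lr * avg_grad
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, which is why the multi-worker run matches single-device training step for step.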
@_discord_760839614595858433:t2bot.ioAPCodes#2552 yeah that is great, in case I really start to like Deep Learning. I will be building a NAS/VM host once the AMD Ryzen 5700G gets a little cheaper. And then I could put 3060s in there! 😄 18:12:19
@_discord_760839614595858433:t2bot.ioAPCodes#2552 anyways, thanks 😅 18:13:28
@_discord_100584466472247296:t2bot.ioh4pZ Send pics of the NAS (if you want to) once it’s done ;) 18:14:53
@_discord_100584466472247296:t2bot.ioh4pZ Np 18:14:55
@_discord_760839614595858433:t2bot.ioAPCodes#2552 yeah I will, but it might be a few months. The 5700G is still a bit too expensive for my taste. But I have an AM4 mainboard which is not in use, and a power supply, and that would be a perfect system for something like that... 18:16:49
@_discord_760839614595858433:t2bot.ioAPCodes#2552 and a case too 18:16:58
@_discord_760839614595858433:t2bot.ioAPCodes#2552 and SSDs. I actually only need the CPU and RAM 18:17:15
@_discord_310835408961536000:t2bot.ioKorven@NotGuix lmao is that a shot 🤣 18:21:26
@_discord_613066600127528969:t2bot.iohash#5127 Stop giving ideas to them 18:29:47
@_discord_613066600127528969:t2bot.iohash#5127 They’re listening 👂 18:29:49
@_discord_100584466472247296:t2bot.ioh4pZ They are https://twitter.com/nearcyan/status/1622816211003142144 19:59:50
@_discord_974962507859001444:t2bot.ioikako joined the room. 20:06:45


