Self-hosting | Security | Sysadmin | Homelab | Programming

265 Members
Welcome to our chat channel dedicated to all things self-hosting, cyber security, system administration, and homelab discussion and support! Whether you're an experienced sysadmin, a cyber security enthusiast, or just starting out with homelabbing, you've come to the right place. We discuss and share knowledge on a wide range of topics: self-hosting your own services, securing your systems and networks, and managing your homelab setups, from configuring servers and networking equipment to setting up VPNs and firewalls. Feel free to ask questions, share your experiences, and exchange tips and tricks with other members of the community. Our goal is a friendly and supportive environment where everyone can learn and grow their skills.

#self-hosting: Discussion about self-hosted or in-house applications and services for private-cloud and privacy-preservation use cases. Inspired by the /r/selfhosted community on Reddit (no official affiliation).

Self-Hosted Software Lists:
https://tinyurl.com/awesome-self-hosted
https://tinyurl.com/awesome-rank-self-hosted
https://github.com/kahun/awesome-sysadmin

How to secure your self-hosted services:
https://tinyurl.com/securing-selfhosted
https://github.com/sbilly/awesome-security

General Self-Hosting Tutorials:
https://landchad.net/

Infosec Links:
https://github.com/jivoi/awesome-osint
https://github.com/sbilly/awesome-security
https://github.com/qazbnm456/awesome-web-security
https://github.com/Hack-with-Github/Awesome-Hacking
https://github.com/hslatman/awesome-threat-intelligence
https://github.com/decalage2/awesome-security-hardening

Rules:
1 - Be awesome and have fun :)
2 - Please don't create threads. I have a specific use case for threads in this channel and I'd like to keep it organized and clean. Chat threads will be removed. Thank you for your cooperation.

68 Servers


20 Feb 2024
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿https://github.com/maubot/maubot I found this. A matrix bot with plugins, already exists. 08:10:13
21 Feb 2024
@iamnotafly:matrix.org@iamnotafly:matrix.org left the room.07:18:27
22 Feb 2024
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Too complicated, that bot setup. Mine is minimal code, does what I need. I'll keep my bot.04:25:04
@calico:cat.casaTricolored Fluffball changed their display name from Tricolored Ball of Fluff to Tricolored Fluffball.13:51:49
@k1nk0z:subr0sa.0j0.jpgeorge.roswell changed their display name from k1nk0z to george.roswell.23:49:29
24 Feb 2024
@krazykirby99999:matrix.orgkrazykirby99999 joined the room.21:50:54
In reply to @hashborgir:mozilla.org
TheBloke_Mistral-7B-Instruct-v0.2-GPTQ is the model being used. People were saying online how it's the best one yet.
the best for which purpose? gguf obsoletes gptq, and specific finetunes or merges of mistral-7b are better for specific tasks
@krazykirby99999:matrix.orgkrazykirby99999For a chatbot, try openhermes-2.5-mistral-7b as gguf21:54:28
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Ok thanks. Welcome btw and thanks for the lib22:00:23
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿People said mistral/mixtral are the best models these days in various reviews online22:01:59
@krazykirby99999:matrix.orgkrazykirby99999mixtral is better, but has higher resource requirements 22:10:50
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿I have a 3060 12GB. I can do it with 12 layers and CPU offloading but I get 1 token per second if that22:11:23
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿With Mistral Instruct 7b 0.2 I get up to 62 tokens/s inference22:11:47
@krazykirby99999:matrix.orgkrazykirby99999Mixtral would be higher quality, but much slower22:12:19
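The "12 layers with CPU offloading" figure above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, where the model sizes (roughly 26 GB for a 4-bit Mixtral 8x7B GGUF, roughly 4.4 GB for a 4-bit Mistral 7B, both with 32 layers) and the 1.5 GB overhead reservation are assumptions for illustration, not measured values:

```python
# Rough estimate: how many transformer layers of a quantized model fit
# in a given VRAM budget, reserving some headroom for KV cache and
# CUDA runtime overhead. All sizes below are illustrative assumptions.

def layers_that_fit(model_size_gb: float, n_layers: int,
                    vram_gb: float, overhead_gb: float = 1.5) -> int:
    """Estimate how many layers can be offloaded to the GPU."""
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# Assumed: Mixtral 8x7B Q4 GGUF ~26 GB / 32 layers, Mistral 7B Q4
# GGUF ~4.4 GB / 32 layers, on a 12 GB card (e.g. an RTX 3060).
print(layers_that_fit(26.0, 32, 12.0))  # → 12 (partial offload only)
print(layers_that_fit(4.4, 32, 12.0))   # → 32 (whole model fits)
```

Under these assumed sizes the estimate lands on roughly the same 12-layer partial offload reported in the chat, which is consistent with the ~1 token/s Mixtral speed versus full-GPU Mistral 7B.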
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿 I have the TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ right now. I'll look for the open hermes one. What format is best? 22:12:52
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Ok22:13:36
@krazykirby99999:matrix.orgkrazykirby99999it's a direct upgrade to gptq22:13:44
@krazykirby99999:matrix.orgkrazykirby99999check out the locallama subreddit22:14:13
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Oh ok, I misread probably. I'm on medications these days, had spinal surgery a few weeks ago22:14:22
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF is made from https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B which is a preference tuned OpenHermes-2.5-Mistral-7B. 22:18:03
@krazykirby99999:matrix.orgkrazykirby99999that's likely very similar22:31:02
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Screenshot_20240224_153558.png
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿I have all the Kirby games bookmarked in Retroarch 😄22:36:15
25 Feb 2024
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿I tested and talked with many people on thebloke's discord server as well, and I get the best performance with GPTQ loaded with Exllama2. Everything else is slower. GGUF is a replacement/upgrade for GGML. GPTQ is GPU only, and GGUF is for GPU+CPU offloading using llama.cpp and that's also slower. 09:34:42
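Why a 7B model runs fully on the 12 GB card while Mixtral does not follows from simple arithmetic: weight size is roughly parameter count times bits per weight divided by 8. A sketch, where the ~4-bit quant width and the ~46.7B total parameter count for Mixtral 8x7B are illustrative assumptions:

```python
# Back-of-envelope quantized weight size: for N billion parameters at
# B bits per weight, the weights occupy about N * B / 8 gigabytes
# (metadata, activations, and KV cache not included).

def quantized_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-file size in GB for a quantized model."""
    return n_params_billion * bits_per_weight / 8

print(quantized_size_gb(7, 4))     # → 3.5 — a 4-bit 7B model fits easily in 12 GB
print(quantized_size_gb(46.7, 4))  # ≈ 23.35 — Mixtral's weights overflow a 12 GB card
```

This is why Mixtral forces CPU offloading on a 12 GB GPU regardless of format, while a 7B GPTQ model can stay entirely on the GPU.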
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿Anyone know a self hosted github copilot alternative that works with openai api? So if I have ooba text-generation-webui backend loaded, it can work with just the api? I see a lot of them require you to use their own engine12:31:09
@hashborgir:mozilla.org🍄 HB|B.CS,BSCSIA,A+,Net+,Sec+,CySA+,Pentest+,Project+,ECES,ITILv4,SSCP,MCP,MCSE|🌿https://github.com/FarisHijazi/localCopilot I tried this and FauxPilot too, I couldn't get either working like I wanted12:31:47
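Any client that speaks the OpenAI completions format should be able to target a text-generation-webui backend with its OpenAI-compatible API enabled. A minimal stdlib-only sketch of such a request; the base URL (port 5000 locally) and the parameter values are assumptions to adjust for your own setup:

```python
# Build a request against an OpenAI-compatible /v1/completions endpoint,
# such as the one text-generation-webui can expose. Uses only the
# standard library; URL and parameters below are illustrative.
import json
import urllib.request

def build_completion_request(prompt: str, base_url: str,
                             max_tokens: int = 128) -> urllib.request.Request:
    """Prepare a POST to a /v1/completions-style endpoint."""
    payload = {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("def fib(n):", "http://127.0.0.1:5000")
print(req.full_url)  # → http://127.0.0.1:5000/v1/completions
# Sending it (requires a running backend):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

A copilot-style editor plugin that only needs this endpoint can then be pointed at the local backend instead of a hosted service.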

