• 1 Post
  • 44 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • more specific to a subset of people who have time to bother

    And that subset of people needs the mindset to learn at least the minimum viable skills to even get started, plus a will to keep learning more and more. I’ve done various kinds of hosting as a career for a couple of decades, and as things change I keep asking myself whether it’s worth my time and effort to keep my home services running, or whether I should just throw money at google/apple/microsoft/whoever to store my stuff and manage my IoT devices, and toss the hardware into the recycling bin.

    I have the skill set required for whatever my home network might need, up to the point that I could fairly easily host a small village from my home (money of course becomes a barrier at some point), but I find myself wondering more and more often whether it’s worth the effort. My Z-Wave setup needs some TLC as something isn’t playing nicely and it causes all kinds of problems with my automations, my wifi network could use a couple more sockets on the walls to work better, I should replace my NVR with something open source to add a couple more cameras around the yard and get better motion detection, the cameras should go on their own VLAN, and so on.

    Most of that stuff is pretty basic to set up and configure (well, that Z-Wave network is a bit of its own thing to manage), and it would actually be pretty nice to have everything working as it should and to expand on what I have to make my everyday life even simpler than it already is. But with a ton of other things going on in life, I’d rather spend a few hours gaming on my sofa than tinkering with something.

    That’s of course just me; if you get your reward and enjoyment from your network, good for you. Personally I think I’ll keep various things running, but at this point in my life, the self-hosting, home networking, automation and all that is more of a chore than a hobby. And I’m pretty sure I don’t like that.


  • I agree with you, a nuclear response would make things very difficult with China and their allies, but there’s plenty of conventional firepower aimed at Russia if things escalate to that point, and should Russia attack with nukes I don’t think they’d get much support for their actions from the east. And triggering a nuclear response would likely end in a MAD scenario, which is something I think (and hope) no one really wants to see through.

    But that still leaves a pretty big field to work with conventional munitions, and a skilled pilot from Sweden could still reach Moscow in 20 minutes or so to turn multiple military targets within the city into rubble. And there are plenty of airfields closer than Stockholm with equally capable fighter jets. On the ground, Finns and Estonians could at least in theory reach Moscow in 10-12 hours, since the majority of the troops defending it are already down on some field in Ukraine, and our artillery forces move pretty damn fast.

    The amount of destruction Russia could cause would of course still be an enormous humanitarian crisis, but even if they could turn Kyiv into a wasteland (and kill millions doing it), it still wouldn’t change the outcome of a full NATO response, without any bullshit politics limiting actions if anyone is allowed to strike on Russian soil.


  • Medvedev found the keys to the booze cabinet again? They seem to happily forget that Moscow is well within reach of multiple NATO countries by now. Obviously a ton of things would need to change before anyone with a gun is standing on Red Square, but Finland, Sweden, Estonia and Poland (among others) are quite capable of hitting the Kremlin (in theory, and in practice if needed) with fighter jets in less than 30 minutes. Additionally, their ports opening onto the Gulf of Finland are within reach of both Finnish and Estonian conventional artillery, and at least we in Finland are pretty capable and accurate with our hardware.

    So, even if they find some old Soviet relic that’s still functional, NATO has multiple options to level multiple cities in Russia before their missile hits the ground. A nuclear attack against Ukraine would of course be a humongous tragedy with a terrible toll in civilian casualties, but I’m pretty confident it would be the last thing the Russia we currently know would do as a country.


  • IsoKiero@sopuli.xyz to Selfhosted@lemmy.world · DNS? (3 days ago)

    As far as I know that is the default way of handling multiple DNS servers. I’d guess that at least some of the firmware out there treats them as primary/secondary, but based on my (limited) understanding, the majority of linux/bsd based software uses one or the other more or less randomly, without any preference. So it’s not always like that, but I’d say it’s less common to treat DNS entries with any kind of preference than to pick one at random.

    But as there’s a ton of various hardware/firmware around, this of course isn’t conclusive; for your specific case you’d need to dig pretty deep to get the actual answer for your situation.
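
    For what it’s worth, on glibc-based Linux systems some of this is tunable in /etc/resolv.conf; the addresses below are examples, and the exact fallback behavior still varies by resolver implementation (glibc by default tries servers in listed order with timeout-based fallback, while systemd-resolved and various stub resolvers behave differently):

```
# /etc/resolv.conf (example addresses)
nameserver 192.168.1.10   # e.g. a local resolver
nameserver 8.8.8.8        # e.g. a public fallback
options rotate            # glibc: spread queries round-robin
options timeout:2 attempts:2
```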


  • IsoKiero@sopuli.xyz to Selfhosted@lemmy.world · DNS? (3 days ago)

    have an additional external DNS server

    While I agree with you that an additional DNS server is without question a good thing, you need to understand that if you set up two nameservers on your laptop (or whatever), they don’t have any preference. So if you have a pihole as one nameserver and google as the other, you will occasionally see ads and your pihole gets bypassed every now and then.

    There are multiple ways of solving this, but people often seem to have the misinformed idea that the first item in your DNS server list is preferred, and that is very much not the case.
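
    To make that concrete, here’s a hypothetical sketch (not any real resolver’s code) of what “no preference” means in practice: if each query may go to either configured server, a pihole listed first still gets skipped a large share of the time.

```python
import random

# Hypothetical simulation: a stub resolver that picks one of the
# configured nameservers without any preference.
NAMESERVERS = ["192.168.1.10", "8.8.8.8"]  # pihole first, Google second

def pick_nameserver(servers):
    # No primary/secondary semantics: any configured server may be asked.
    return random.choice(servers)

# Over many queries, a noticeable share bypasses the pihole entirely,
# which is why ads still leak through now and then.
hits = sum(pick_nameserver(NAMESERVERS) == "8.8.8.8" for _ in range(10_000))
print(f"{hits / 10_000:.0%} of queries went to Google, skipping the pihole")
```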

    Personally I’m running a pihole for my network on a VM, and if that’s down for a longer time I’ll just switch DNS servers in DHCP and reboot my access points (as family hardware is 99% on wifi), so the rest of the family has working internet while I work on bringing the rest of the infrastructure back online. But that’s just my scenario; yours will most likely be more or less different.


  • Well, in the channel description he clearly states that those are only motorized recreations of proposed perpetual motion machines. But on individual videos that info doesn’t seem to be readily available, so it’s not totally wrong to say that the whole channel is a lie, though strictly speaking not exactly correct either.

    Some of those gadgets would make a nice desktop toy, obviously with a usb power brick or batteries.


  • Back in the day with dial-up internet, man pages, readmes and other included documentation were pretty much the only way to learn anything, as the web was in its very early stages. And ‘man <whatever>’ is still way faster than searching for the same information on the web. Today at work I needed the man page for setfacl (since I still don’t remember every command’s parameters) and found out that WSL2 Debian on my office workstation doesn’t ship the ‘man’ command out of the box, and I was more than mildly annoyed that I had to search for that.

    Of course today it was just an alt+tab to the browser, a new tab and a few seconds for results, which most likely consumed enough bandwidth that on dial-up it would’ve taken several hours to download, but it was annoying enough that I’ll spend some time on Monday fixing this on my laptop.
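
    Assuming a stock Debian image, the fix is most likely just installing the man-db package (package names here are my assumption for Debian-based WSL2; verify for your image):

```shell
# Hypothetical fix for a Debian-based WSL2 image lacking 'man'
sudo apt update
sudo apt install -y man-db manpages
man setfacl   # should work once the pages are installed
```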


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Man pages maintenance suspended (10 days ago)

    I mean that the product made here is not the website, and I can well understand that the developer has no interest in spending time on it, as it’s not beneficial to the actual project he’s been working on. And I can also understand that he doesn’t want to receive donations from individuals, as that would bring even more work to manage, which is time taken away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · The Insecurity of Debian (12 days ago)

    The threat model seems a bit like fearmongering. Sure, if your container gets breached and an attacker can (on some occasions) break out of it, it’s a big deal. But how likely is that, really? And even if it happened, isn’t the data in the containers far more valuable than the base infrastructure underneath in almost all cases?

    I’m not arguing against the SELinux/AppArmor comparison, SELinux can be more secure, assuming it’s configured properly, but there are quite a few steps in hardening a system before that. And as others have mentioned, neither of those is really widely adopted, and I’d argue that when you design your setup properly from the ground up you really don’t need either, at least unless the breach comes from some obscure 0-day or other bug.

    For the majority of data leaks and other breaches, that’s almost never the cause. If your CRM or e-commerce software has a bug (or a misconfiguration, or a ton of other options) that allows dumping everyone’s data out of the database, SELinux wouldn’t save you.

    Security is hard indeed, but that’s a bit of an odd corner to look at it from, and it doesn’t have anything to do with Debian or RHEL.


  • If I had to guess, I’d say that e1000 cards are pretty well supported on every public distribution/kernel without any extra modules, but I don’t have any around to verify. At least on this ubuntu I can’t find any e1000-related firmware package or anything else, so I’d guess it’s supported out of the box.

    As for ifconfig, if you omit ‘-a’ it doesn’t show interfaces that are down, so maybe that’s the obvious thing you’re missing? It should show up in NetworkManager (or any other graphical tool, as well as nmcli and other cli alternatives), but as you’re going the manual route I assume you’re not running any. mii-tool should pick it up on the command line too.
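
    For instance (ifconfig comes from the optional net-tools package, so it may be missing on newer installs):

```shell
# Interfaces that are DOWN are hidden without '-a':
ifconfig -a || true   # net-tools; may not be installed everywhere
ip link show          # iproute2 equivalent; look for 'state DOWN'
```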

    And if it’s not that simple, there seems to be at least something around the internet if you search for ‘NVM checksum is not valid’ and ‘e1000e’, specifically related to Dell, but I didn’t go too deep down that path.


  • Part of it is because the technology, especially a decade or so ago, had restrictions. Like with ADSL, which often couldn’t support higher upload speeds due to the end-user hardware, and the same goes for 4/5G today: your cellphone just doesn’t have the power to transmit as fast/far as the tower access point.

    But with wired connections, especially fibre/coax, that doesn’t apply, and money comes into play. ISPs pay for bandwidth to the ‘next step’ up the network. Your ‘last mile’ ISP buys some amount of traffic from the ‘state wide operator’ (kind of; it depends heavily on where you live, but the analogy should work anyway), and that’s where ‘upload’ and ‘download’ traffic start to matter. I’m not an expert by any stretch here, so take this with a spoonful of salt, but traffic inside your ISP’s network, going through their own hardware, doesn’t cost ‘anything’ (electricity for the switches/routers and their maintenance excluded as a cost of doing business), but once you push an additional 10Gbps to the neighboring ISP it takes resources to carry it.

    And that (at least here) is where asymmetric connections play a part. Let’s say you have a 1Gbps connection to youtube/netflix/whatever. The source needs to pay the network for the bandwidth for your stream to get through with a decent user experience. But the traffic from your ISP back out to the network is far less; a blunt analogy is that your computer sends a request saying ‘show me the latest Mr. Beast video’ and the youtube server replies ‘sure, here’s a few gigabits of video’.

    Now, everyone pays for that ‘next step’ connection by the actual amount of data carried (as their hardware needs the capacity to take the load). For a generic home user, the amount downloaded (coming through your network) is vastly bigger than the traffic going out of it. That way your last-mile ISP can negotiate with the ‘upstream’ operator for the capacity to take 10Gbps in (which is essentially free once the hardware is purchased) while only sending 1Gbps out, so the ‘upstream’ operator needs far less capacity through their network in ‘the other direction’.

    So, as link speeds and traffic volume are billed separately, it’s far more profitable to offer 1Gbps down and 100Mbps up to the home user. All of this is of course a gross simplification, and in the real world things are vastly more complex with caching servers, multiple connections to other networks and so on, but in the end every bit you transfer has a price, and if you mostly offer to sink in the data your users want, while the data your users push back upstream is significantly less, there’s money to be made in that imbalance, and that’s why your connection might be asymmetric.
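
    The imbalance above can be sketched with a couple of made-up numbers (these are purely illustrative, not real ISP pricing or capacities):

```python
# Hypothetical capacities an ISP buys from its upstream operator.
downstream_gbps = 10.0   # traffic flowing in toward the ISP's users
upstream_gbps = 1.0      # traffic the users push back out

# If transit capacity is provisioned (and billed) per direction, the
# cost is dominated by whichever direction carries more traffic, so a
# 10:1 traffic mix makes symmetric last-mile links a waste of money.
ratio = downstream_gbps / upstream_gbps
print(f"download/upload imbalance: {ratio:.0f}x")
```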




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago... (22 days ago)

    I read Linus’s book several years ago, and based on that flimsy knowledge in the back of my head, I don’t think Linus was really competing with anyone at the time. Hurd was around, but it was (and still is) coming soon™ to widespread use, and things between AT&T and BSD were “a bit” complex at the time.

    BSD obviously brought a ton of stuff to the table that Linux greatly benefited from, and their stance on FOSS shouldn’t go unappreciated, but assuming my history knowledge isn’t too badly flawed, BSD and Linux weren’t direct competitors; they started gaining traction around the same time (despite BSD’s much longer history) and they grew stronger together instead of competing with each other.

    A ton of us owe our current corporate lives to the people who built the stepping stones before us, and Linus is no different. I personally owe Linus a ton for enabling my current status at the office, but the whole thing wouldn’t have been possible without the people who came before him. RMS and the GNU movement play a big part in that, but an equally big part is played by a ton of other people.

    I’m not an expert by any stretch on the history of Linux/Unix, but I’m glad that the people preceding my career did what they did. Covering all the bases on the topic would take far more than I can spit out on a platform like this; I’m just happy that we have the FOSS movement at all, instead of everything being a walled garden today.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago... (22 days ago)

    That kind of depends on how you define FOSS. The way we think of it today was in very early stages back in 1991, and the original source was distributed as free, both as in speech and as in beer, but commercial use was prohibited, so it doesn’t strictly speaking qualify as FOSS (as we understand it today). About a year later Linux was released under the GPL, and the rest is history.

    Public domain code, the academic tradition of sharing source code and things like that predate both Linux and GNU by a few decades, and even the Free Software Foundation came 5-6 years before Linux, but Linux itself has been pretty much as free as it is today from the start. GPL, GNU, FSF and all the things Stallman created or was part of (regardless of his conflicting personality) just created a set of rules on how to play this game, pretty much before any game or rules for it existed.

    Minix was a commercial thing from the start, Linux wasn’t, and things just got refined along the way. You are of course correct that the first release of Linux wasn’t strictly speaking FOSS, but the whole ‘FOSS’ mentality and its rules weren’t really a thing back then either.

    There’s of course an academic debate to be had for days on which came first and which rules whoever obeyed and which release counts as FOSS or not, but for all intents and purposes, Linux was free software from the start and the competition was not.


  • As a rule of thumb, if you pay more money you get a better product. With spinning drives that almost always means that more expensive drives (on average) run longer than cheaper ones. Performance is another metric, but balancing the two is where the smoke and mirrors come into play. You can get a pretty darn fast drive for a premium price which will fail in 3-4 years, or for a similar price you can get a somewhat slower drive which will last you a decade. And that’s on average: you might get a ‘cheap’ brand high-performance drive that runs without any issues for a long, long time, and you might also get a brand-name NAS drive that fails in 2 years. Those averages only start to play a role if you buy drives by the dozen.

    Backblaze (among others) publishes very real-world statistics on which drives to choose (again, on average), but a home gamer usually doesn’t run enough drives to get any benefit from the statistical point of view. Obviously something from HGST or WD will most likely outperform a no-name brand from aliexpress, and personally I’d only get something rated for 24/7 use, like WD Red, but that’s no guarantee they’ll actually run longer, as there are always deviations from the gold standard.

    So, long story short, you will most likely get significantly different results depending on which brand/product line you choose, but it’s not guaranteed, so you need to work around that with backups, different RAID scenarios (likely RAID 5 or 6 for a home gamer) and an acceptable window for downtime (how fast you can get a replacement, how long it’ll take to pull data back from backups and so on). I’ll soon migrate my setup from a somewhat professional setting to a more hobbyist one, and with my pretty decent internet connectivity I’ll most likely go with a 2-1-1 setup instead of the ‘industry standard’ 3-2-1 (for a serious setup you should probably learn what those really mean, but in short: number of copies existing - number of different storage media - number of offsite copies).
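
    The copies-media-offsite counting behind “3-2-1” can be sketched like this (the backup list itself is hypothetical, just to show how the three numbers are counted):

```python
# Hypothetical backup inventory for checking the 3-2-1 rule:
# >= 3 copies, >= 2 different storage media, >= 1 offsite.
backups = [
    {"media": "nas",   "offsite": False},  # primary copy on the NAS
    {"media": "usb",   "offsite": False},  # local external drive
    {"media": "cloud", "offsite": True},   # remote object storage
]

copies = len(backups)
media_types = len({b["media"] for b in backups})
offsite = sum(b["offsite"] for b in backups)

meets_321 = copies >= 3 and media_types >= 2 and offsite >= 1
print(f"copies={copies} media={media_types} offsite={offsite} 3-2-1={meets_321}")
```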

    As for what you really should use, that depends heavily on your usage. For a media library a bigger 5400rpm drive might be better than a slightly smaller 7200rpm one, and then there are all kinds of edge cases plus potential options for SSD caching and a ton of other stuff, so, unfortunately, the actual answer has quite a few variables, starting from your wallet.



  • In theory you just send a link to click and that’s it. But, as there always is a but, your jitsi setup most likely doesn’t have massive load balancing, dozens of server locations and all the jazz that works around random network issues and everything else that keeps the internet running.

    There’s a ton of things well outside your control, and they may or may not bite you in the process. The big players have tons of workforce and money to make sure those kinds of things don’t happen, and they still do now and then. Personally, for a single-use scenario like yours, I wouldn’t bother, but I’m not stopping you either; it’s a pretty neat thing to do. My (now dead) jitsi instance once saved a city council meeting when Teams had issues, and that got me pretty good bragging rights, so it can be rewarding too.