The diversity of Linux distributions is one of its strengths, but it can also be challenging for app and game development. Where do we need more standards? For example, package management, graphics APIs, or other aspects of the ecosystem? Would such increased standards encourage broader adoption of the Linux ecosystem by developers?

    • steeznson@lemmy.world · 3 days ago

      There is a separate kernel being written from scratch entirely in Rust that might interest you. I’m not sure if it’s the main one, but https://github.com/asterinas/asterinas was the first result when I searched.

      By the tone of your post you might just want to watch the world burn, in which case I’d raise an issue in that repo saying “Rewrite in C++ for compatibility with a wider variety of CPU archs” ;)

      • muusemuuse@lemm.ee · 3 days ago (edited)

        I’m of the opinion that a full rewrite in Rust will eventually happen, but they need to be cautious and not risk alienating developers à la Windows Mobile, so for now it’s still being done in pieces. I’m also aware that many of the devs who cut their teeth on the kernel’s C code like it as it is and resist all change, which causes a lot of arguments.

        Looking at that link, I’m not liking the MPL.

  • HiddenLayer555@lemmy.ml · 4 days ago (edited)

    Where app data is stored.

    ~/.local

    ~/.config

    ~/.var

    ~/.appname

    Sometimes more than one place for the same program.

    Pick one and stop cluttering my home directory.
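
    There is actually already a cross-distro standard for this: the freedesktop.org XDG Base Directory specification, which tells apps to use a handful of well-known directories (overridable via environment variables) instead of dropping dotfiles directly into $HOME. A minimal sketch of resolving those directories in Python; the helper name is mine, but the environment variables and defaults are the spec’s:

```python
import os

def xdg_dir(env_var: str, default_suffix: str) -> str:
    """Resolve an XDG base directory: use the env var if set and non-empty,
    otherwise fall back to the spec's default location under $HOME."""
    value = os.environ.get(env_var)
    if value:
        return value
    return os.path.join(os.path.expanduser("~"), default_suffix)

# Defaults mandated by the XDG Base Directory spec:
config_dir = xdg_dir("XDG_CONFIG_HOME", ".config")       # app settings
data_dir   = xdg_dir("XDG_DATA_HOME", ".local/share")    # app data
state_dir  = xdg_dir("XDG_STATE_HOME", ".local/state")   # logs, history
cache_dir  = xdg_dir("XDG_CACHE_HOME", ".cache")         # disposable cache
print(config_dir, data_dir, state_dir, cache_dir)
```

    Apps that follow the spec keep $HOME down to a few predictable dotdirs; the clutter complained about above comes from apps that ignore it.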

  • TrivialBetaState@sopuli.xyz · 3 days ago

    While every area could benefit from standardization in terms of stability and ease of development, the whole system, and each area within it, would suffer in terms of creativity. There needs to be a balance. However, if I had to choose one thing, I’d say package management. At the moment we have deb, rpm, pacman, flatpak, snap (the latter probably shouldn’t count, as the server side is proprietary) and more from some niche distros. This makes it very difficult for small developers to offer their work to all or most users. Otherwise, I think it is a blessing to have so many DEs, APIs, etc.

  • JuxtaposedJaguar@lemmy.ml · 4 days ago (edited)

    Each monitor should have its own framebuffer device, rather than one app controlling all monitors at a time and every app having to implement its own multi-monitor support. I know fbdev is an inefficient, un-accelerated wrapper around DRI, but it’s so easy to use!

    Want to draw something on a particular monitor? Write to its framebuffer file. Want to run multiple apps on multiple screens without needing your DE to launch everything? Give each app write access to a single fbdev. Want multi-seat support without needing multiple GPUs? Same thing.

    Right now, each GPU gets only one fbdev, and it has the resolution of the smallest monitor plugged into that GPU. Its contents are then mirrored to every monitor, even though they all have their own framebuffers at the hardware level.
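
    To illustrate why fbdev is “so easy to use”: drawing is just arithmetic on a memory-mapped file. A hedged sketch of the offset math in Python; the mode values are illustrative (in real code, line_length and bits_per_pixel come from the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls), and the bytearray stands in for the mmap’ed device:

```python
def pixel_offset(x: int, y: int, line_length: int, bits_per_pixel: int) -> int:
    """Byte offset of pixel (x, y) in a linear framebuffer.
    line_length is the stride in bytes and may include row padding."""
    return y * line_length + x * (bits_per_pixel // 8)

# Illustrative 1920x1080 mode, 32 bpp, no row padding: stride = 1920 * 4 bytes.
fb = bytearray(1080 * 1920 * 4)          # stand-in for the mmap'ed /dev/fb0
off = pixel_offset(x=100, y=50, line_length=1920 * 4, bits_per_pixel=32)
fb[off:off + 4] = b"\x00\x00\xff\x00"    # one red pixel, XRGB8888 little-endian
print(off)
```

    With one fbdev per monitor, as proposed above, the same few lines would target an individual display simply by opening a different /dev/fbN.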

      • JuxtaposedJaguar@lemmy.ml · 3 days ago

        Yes and no. It would solve some problems, but because it has no (non-hacky) graphics acceleration, most DEs wouldn’t use it anyway. The biggest benefit would be from not having to use a DE in some circumstances where it’s currently required.

  • gandalf_der_12te@discuss.tchncs.de · 4 days ago (edited)

    I’m not sure whether this should be a “standard”, but we need a Linux distribution where the user never has to touch the command line. Such a distro would be beneficial and useful to new users who don’t want to learn terminal commands.

    And also we need a good app store where users can download and install software in a reasonably safe and easy way.

    • RawrGuthlaf@lemmy.sdf.org · 4 days ago

      I really don’t understand this. I put a fairly popular Linux distro on my son’s computer and never needed to touch the command line. I update it by command line only because I think it’s easier.

      Sure, you may run into driver scenarios or things like that from time to time, but using supported hardware would never present that issue. And Windows has just as many random “gotchas”.

      • lumony@lemmings.world · 3 days ago

        I try to avoid using the command line as much as possible, but it still crops up from time to time.

        Back when I used windows, I would legitimately never touch the command line. I wouldn’t even know how to interact with it.

        We’re not quite there with Linux, but we’re getting closer!

    • AugustWest@lemm.ee · 4 days ago (edited)

      Why do people keep saying this? If you don’t want to use the command line then don’t.

      But there is no good reason to say people shouldn’t. It’s always the best way to get across what needs to be done and have the person execute it.

      The fedora laptop I have been using for the past year has never needed the command line.

      On my desktop I use arch. I use the command line because I know it and it makes sense.

      It’s sad that people see it as a negative when it is really useful. But as of today, you can get by without it.

      • lumony@lemmings.world · 3 days ago

        It’s always the best way to get across what needs to be done and have the person execute it.

        Sigh. If you want to use the command line, great. Nobody is stopping you.

        For those of us who don’t want to use the command line (most regular users) there should be an option not to, even in Linux.

        It’s sad people see it as a negative when it is really useful.

        It’s even sadder seeing people lose sight of their humanity when praising the command line while ignoring all of its negatives.

  • ikidd@lemmy.world · 5 days ago (edited)

    Domain authentication and group policy analogs. Honestly, I think it’s the major reason Linux isn’t used as a workstation OS, even though it’s inherently better suited for it than Windows in most office/gov environments. If IT can’t centrally manage it the way they can Windows, it’s not going to gain traction.

    Linux in server farms is a different beast to IT. They don’t have to deal with users on that side, just admins.

    • Lka1988@lemmy.dbzer0.com · 4 days ago

      An immutable distro would be ideal for this kind of thing. ChromeOS (one example of an immutable distro) can be centrally managed, but the caveat with ChromeOS in particular is that its management has to go through Google, via their enterprise Google Workspace suite.

      But as a concept, this shows that it’s doable.

      • silly goose meekah@lemmy.world · 4 days ago

        I don’t think anyone was saying it’s impossible, just that it needs standardization. I imagine Windows is more appealing to companies because it’s easier to find admins for it than for some specific Linux system that only a few people are skilled enough to manage.

    • fxdave@lemmy.ml · 4 days ago (edited)

      I’ve never understood putting arbitrary limits on a company laptop. I was always looking for ways to get around them. Once I ended up using a VM, with no limits…

      • Lka1988@lemmy.dbzer0.com · 4 days ago (edited)

        TL;DR - Because people are stupid.

        One of my coworkers (older guy) tends to click on things without thinking. He’s been through multiple cyber security training courses, and has even been written up for opening multiple obvious phishing emails.

        People like that are why company-owned laptops are locked down with group policy and other security measures.

      • ikidd@lemmy.world · 4 days ago

        I mean, it sucks, but the stupid shit people will do with company laptops…

  • irotsoma@lemmy.blahaj.zone · 5 days ago

    Not offering a solution here exactly, but as a software engineer and architect: this is not a Linux-only problem. It exists across all software. Very few applications are fully self-contained these days, because it’s too complex to build everything from scratch every time. And a lot of software depends on how some poorly documented feature behaved at the time, behavior that was actually a bug, eventually got fixed, and then broke the applications that relied on it. Also, any improvement to a library can potentially break your application, and most developers don’t get time to test against every newer version.

    The real solution would be better CI/CD build systems that automatically test applications against newer versions of their libraries and report dependencies better. But so many applications are short on automated unit and integration tests, because testing is tedious and many companies and younger developers consider it a waste of time and money. So it would really only work for well-maintained, well-managed open-source applications. But who has time for all that?

    Anyway, it’s something I’ve been thinking about a lot at my current job as an architect for a major corporation. I’ve had to do a lot of side work to get things even part of the way there. And I don’t have to deal with multiple OSes and architectures. But I think it’s an underserved area of software development and distribution that is just not “fun” enough to get much attention. I’d love to see it at all levels of software.

  • Mio@feddit.nu · 5 days ago

    A configuration GUI standard. Usually there is a config file that I am supposed to edit as root, typically in the terminal.

    There should be a general GUI tool that reads those files and obeys another file containing the rules. Say: if you enable this feature, you can’t have that one on at the same time; or this number has to be between 1 and 5, no more, no less. Basic validation. And the program could be run with --validation to decide for itself whether the config looks good.
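
    A hedged sketch of what such a machine-readable rules file plus validator could look like; the rule schema and the key names are invented purely for illustration:

```python
# Hypothetical rules file, parsed into a dict: each entry constrains one key.
rules = {
    "volume":    {"type": int, "min": 1, "max": 5},           # must be 1..5
    "fast_mode": {"type": bool, "conflicts": ["safe_mode"]},  # mutually exclusive
    "safe_mode": {"type": bool},
}

def validate(config: dict) -> list[str]:
    """Check a parsed config against the rules; empty list means valid."""
    errors = []
    for key, rule in rules.items():
        if key not in config:
            continue
        value = config[key]
        if not isinstance(value, rule["type"]):
            errors.append(f"{key}: expected {rule['type'].__name__}")
            continue
        if "min" in rule and value < rule["min"]:
            errors.append(f"{key}: must be >= {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{key}: must be <= {rule['max']}")
        for other in rule.get("conflicts", []):
            if value and config.get(other):
                errors.append(f"{key}: cannot be enabled together with {other}")
    return errors

print(validate({"volume": 7, "fast_mode": True, "safe_mode": True}))
```

    A generic GUI could render widgets straight from the same rules (a 1-to-5 slider, mutually exclusive checkboxes) and call the program’s own --validation mode before saving.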

      • Einar@lemm.ee (OP) · 5 days ago

        I agree. openSUSE should set the standard here.

        Tbf, they really need a designer to polish it visually a bit. It exudes its strong “sysadmin only” vibes a bit much. In my opinion. 🙂

  • kibiz0r@midwest.social · 5 days ago

    ARM support. Every SoC is a new horror.

    Armbian does great work, but if you want another distro you’re gonna have to go on a lil adventure.

    • Lka1988@lemmy.dbzer0.com · 4 days ago (edited)

      Systemd is fine. This sounds like an old sysadmin who refuses to learn because “new thing bad” with zero logic to back it up.

      • chaoticnumber@lemmy.dbzer0.com · 4 days ago

        As a former sysadmin, there is plenty of logic in saying that. I have debugged countless systems that were using systemd, yet somehow the OpenRC ones just chug along. In the server space, systemd is a travesty.

        In the desktop space, however, I much prefer systemd, and for dev environments as well. So yes, that is where “it’s fine”. More than fine: needed!

        I just hate this black-and-white view of the world; I can’t stand it. Everything has its place: on servers you want as small a software footprint as possible, on desktops you want compatibility.

    • steeznson@lemmy.world · 4 days ago

      Yes, I find that dude to be very disagreeable. He’s like everything that haters claim Linus Torvalds is - but manifested IRL.

      • lumony@lemmings.world · 3 days ago

        If the people criticizing him could roll up their sleeves and make better software, then I’d take their criticisms seriously.

        Otherwise they’re “just a critic.”

  • enumerator4829@sh.itjust.works · 5 days ago

    Stability and standardisation within the kernel for kernel modules. There are plenty of commercial products that use proprietary kernel modules that basically only work on a very specific kernel version, preventing upgrades.

    Or they could just open source and inline their garbage kernel modules…

  • asudox@lemmy.asudox.dev · 5 days ago

    Flatpak with more improvements to size and sandboxing could be accepted as the standard packaging format in a few years. I think sandboxing is a very important factor as Linux distros become more popular.

  • SwingingTheLamp@midwest.social · 5 days ago

    One that Linux should’ve had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that do exist are generally non-standard plug-in systems.

    A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they’re backwards-compatible with, so new versions don’t necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren’t present, allowing programs to drop certain features gracefully, so we wouldn’t need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.
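
    For concreteness, the Darwin/Mach-O scheme works roughly like this (simplified, and from memory): each dylib declares a current version and a compatibility version, the oldest client it still supports; the executable records the version it was built against, and the loader compares the two. A sketch, with names of my own choosing:

```python
def loadable(built_against: tuple, runtime_compat_version: tuple) -> bool:
    """Darwin-style load check (simplified): the executable records the
    library version it was linked against; the runtime library declares the
    oldest client version it remains compatible with. Loading succeeds when
    the client is not older than that floor."""
    return built_against >= runtime_compat_version

# Client built against libfoo 2.3; runtime libfoo is newer but still
# supports clients back to 2.0 -> loads fine despite the upgrade.
print(loadable((2, 3), (2, 0)))
# Runtime libfoo dropped support for pre-3.0 clients -> refuses to load.
print(loadable((2, 3), (3, 0)))
```

    ELF symbol versioning covers part of this on Linux, but per-symbol rather than per-library, and there is no standard way to mark a DT_NEEDED library as optional.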

    Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.

    • Ferk@lemmy.ml · 4 days ago (edited)

      interoperability == API standardization == API homogeneity

      standardization != monopolization