I support this law (fuck cars), but if you step into the street thinking an oncoming car won’t destroy you like a pinata stuffed with ketchup packets, you have survived the luckiest lawsuit-free 28 years.
Pretty cool. I wonder if this could be scaled up to a more life-sized print? Maybe go-kart sized???
The STL files are $27 - not free, but I’m sure the designer put a ton of hours into this.
And this behavior is somehow sold to the public as a way to boost the economic well-being of the people living under the isolationist programs, but instead it lets profiteering corporations exert even more control over an artificially narrowed market.
Locking the door with the fox(es) in the henhouse.
They could have gone with a “visor” frame design that would have been more fashionable, but I think this is pretty impressive for demonstrating the bare minimum amount of plastic needed to house holographic transparent displays, internal/external tracking sensors, and a sound system.
What they claim these glasses can do is absolutely incredible (we won’t really know because they are only being used internally for further development).
There’s a place for this, if it’s entertaining. Memes, comedy, maybe some more legitimate uses too. A lot of YouTube is some guy just sitting in front of a camera in the most boring perfectly curated home office. Throw in something visually interesting that enhances the subject matter and I may watch more.
This would ideally become standardized among web servers with an option to easily block various automated aggregators.
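To make that concrete, here is a minimal sketch of the kind of per-site switch I mean, written as a Python WSGI middleware rather than real web-server config. GPTBot, CCBot, and Google-Extended are real crawler user-agent tokens; the class, the blocklist, and the 403 behavior are just illustrative, not any existing standard.

```python
# Sketch only: refuse requests from known AI/aggregator crawlers by user agent.
# A real standard would presumably live in the web server config, not app code.

BLOCKED_AGENTS = ("GPTBot", "CCBot", "Google-Extended")  # real crawler UA tokens

class BlockAggregators:
    def __init__(self, app, blocked=BLOCKED_AGENTS):
        self.app = app
        self.blocked = blocked

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(token in ua for token in self.blocked):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Automated aggregation not permitted.\n"]
        return self.app(environ, start_response)

# usage: wrap your existing WSGI app, e.g. app = BlockAggregators(app)
```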
Regardless, all of us combined are a grain of rice compared to the real meat and potatoes AI trains on: social media, public image storage, copyrighted media, etc. All those sites with extensive privacy policies that are signing contracts licensing their content for training.
Without laws (and I’m not sure I support anything in this regard yet), I do not see AI progress slowing. Clearly, inbreeding AI models has a similar effect to inbreeding in nature. Fortunately there is enough original digital content out there that this does not need to happen.
If it doesn’t offer value to us, we are unlikely to nurture it. Thus, it will not survive.
I don’t want to get in the way of your argument re. Usenet, but spinning hard drives will last longer if they stay on. Starting and stopping the spindle motor will impart the greatest wear. As long as you have the thermals managed, a spinning disk is a happy disk.
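If you want to see how much spin-up wear a drive has actually accumulated, the relevant counters are in SMART on most drives. Rough sketch below, assuming smartmontools is installed and /dev/sda is the disk; parsing smartctl’s table output like this is brittle and only meant to show which attributes to watch.

```python
# Read the SMART attributes most relevant to spin-up wear and thermals.
# Requires smartmontools and permission to query the device.
import subprocess

ATTRS = ("Start_Stop_Count", "Load_Cycle_Count", "Power_On_Hours", "Temperature_Celsius")

def smart_summary(device="/dev/sda"):
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=True
    ).stdout
    summary = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in ATTRS:
            summary[fields[1]] = fields[9]  # RAW_VALUE column
    return summary

if __name__ == "__main__":
    for name, value in smart_summary().items():
        print(f"{name}: {value}")
```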
Just curious if you had a reference for this statement since it seems to be false in multiple ways.
This also works for binary cable or interface connectors formerly known as “male” and “female”.
I want Ars content to be part of whatever training data is provided to the best models. How does that get done without it appearing that they are being bought?
Even if their contract explicitly states that it is a data-sharing agreement only, and that the products of the media organization (articles/investigations) are not grounds for breach or retaliation, it will be assumed that there is now some partiality in future reporting.
So, for all media companies, the options seem to be:
Is there a GPL or other license structure that permits data sharing for LLM training in a way that it does not get transformed into something evil?
I pay for Nebula and try to watch as much as I can there. The content is more “pleasant department store” and less “Mexican public market”.
I do watch YouTube regularly when channel-surfing, but if I ever see an ad (which happens only on mobile devices), I close it immediately and do something else. It’s not that I don’t think I should be able to watch everything for $0, but YouTube ads are so jarring, random, irrelevant and just make me sick. They literally ruin whatever I was watching and make me sad to exist.
It can be exhausting to wade through the absolute meat market of clickbait titles and thumbnails to find something that not only looks interesting but won’t abuse me with infomercial-style audio/visuals.
YouTube enables and promotes the “content creators” who abuse human psychology to accumulate views, likes, subscriptions, etc. The best thing that could happen is that they continue to be exposed as the drug dealers they are.
You would need to run the LLM on the system that has the GPU (your main PC). The front-end (typically a WebUI) could run in a Docker container and make API calls to your LLM system. Unfortunately, that requires the model to stay loaded in VRAM on your main PC, severely reducing what else you can do with that computer, GPU-wise.
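For example, if the GPU box were serving the model with something like Ollama (its default port is 11434), the containerized front-end only needs to hit that HTTP API over the LAN. The hostname and model name below are placeholders:

```python
# The model runs on the GPU machine (assumed here to be served by Ollama);
# this code, which could live in the containerized front-end, just calls it.
import requests

GPU_HOST = "http://gpu-pc.local:11434"  # placeholder address of your main PC

def ask(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        f"{GPU_HOST}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Why does the model have to stay resident in VRAM?"))
```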
I absolutely agree, but I have a sneaking but unfounded suspicion that many decision makers don’t want to prove out this theory.
WFH during the pandemic already triggered a panic from those whose income depends on the status quo of urban commuting. To them, demonstrating that we don’t need offices OR personal automobiles is a dangerous experiment to conduct in one of the largest metro areas in the world.
My god, what if it works? What would we do with all this pavement and gasoline?!
This article is kind of shitty. It looks like the content was mostly taken from the general media coverage that was going around pre-pandemic, edited to incorporate the latest Meta financials. This happens every time new numbers are published. R&D is not cheap and a vast amount of Meta’s research has not been converted to revenue.
Reality Labs is also where Meta’s AI development is happening, so their costs are not just VR-related research. It’s also LLM and other machine learning domains. There is some crossover, such as computer vision, but a lot of their research does not directly apply to what we currently consider VR/MR/AR.
Quest 2 sold over 20 million units, and nearly as many Quest 2/3 headsets have been sold as Xbox Series X/S consoles. Quest products are frequently sold out on Amazon. That is not an “obvious lack of success”. The only thing obvious is the clueless premise of the entire article (what is “MAGR” anyway?). Framing VR as a gaming platform is another sign that the article was copy-pasted from something written many years ago.
Quest 3 is awesome. VR is still growing in many ways thanks to faithful innovators and dreamers, and without Meta we would be nowhere close to where we are today. There would be no Apple Vision Pro. Finally, after a decade, we are beginning to see real competition in the industry which is already accelerating progress and further investment from Meta, Apple, Google, etc. “Microsoft has not engaged with this technology at all” – what is Microsoft Mesh, then?
It seems the only way to justify the expenses from Meta’s perspective is the long game that results in them being a dominant platform for VR apps. I think it’s generally accepted that nobody wants this outcome, but meanwhile I am thankful for their investment. At this time, the Quest 3 is a relatively open platform as far as Android-based devices go. You can ADB into it and sideload software, and when connected to a PC there are numerous debugging capabilities.
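For what it’s worth, sideloading really is just stock adb once developer mode is enabled on the headset. A toy Python wrapper, purely for illustration; the APK path is a placeholder and you could just as easily type the two adb commands by hand:

```python
# Illustration of the "open platform" point: standard adb commands work
# against the headset from a PC once developer mode is turned on.
import subprocess

def sideload(apk_path: str) -> None:
    subprocess.run(["adb", "devices"], check=True)                   # confirm the headset is visible
    subprocess.run(["adb", "install", "-r", apk_path], check=True)   # -r = replace existing install

if __name__ == "__main__":
    sideload("MyApp.apk")  # placeholder APK
```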
Look at this in the same light as the 2nd Amendment: bearing arms was more compatible with society when the “arms” were mechanically limited in their power and capability. Gun laws have matured to some degree since then, restricting or banning the higher-powered weaponry available today.
Maybe slander/defamation protections are not agile or comprehensive enough to curtail the proliferation of AI-generated material. It is certainly much easier to malign or impersonate someone now than ever before.
I really don’t think software will ever be successfully restricted by the government, but the hardware behind it might end up with some form of firmware-based lockout that limits AI capabilities to approved models carrying a certificate signed by the hardware maker (after the maker vets the submission for legally mandated safety or anti-abuse features).
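To be clear, nothing like this exists today; it’s purely hypothetical. But the flow I’m imagining is roughly “firmware refuses to run weights whose signature doesn’t verify against the maker’s baked-in public key,” something like this toy sketch (the function names, key handling, and overall flow are all invented for illustration):

```python
# Hypothetical "firmware lockout" check: only load model weights whose
# signature verifies against a vendor public key baked into the hardware.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def model_is_approved(weights: bytes, signature: bytes, vendor_pubkey_bytes: bytes) -> bool:
    vendor_key = Ed25519PublicKey.from_public_bytes(vendor_pubkey_bytes)
    try:
        vendor_key.verify(signature, weights)  # raises InvalidSignature if the cert is bad
        return True
    except InvalidSignature:
        return False

# In this imagined scheme, the firmware would refuse to allocate GPU resources
# unless model_is_approved(...) returned True for the shipped certificate.
```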
But the horse has already left the barn. Even the current level of generative AI technology is fully capable of fooling just about anyone, and it will never be stopped without advancements in AI detection tools or some very aggressive changes to the law. Here come the historic GPU bans of the late ’20s!
I love that this sounds like a threat and I cannot stop laughing
Sure - that’s why people are resorting to stealing butter from grocery stores. Positively thriving.