This image woke me up. Downloading it now. I had not realized that models could handle this type of scene composition nowadays.
But what is the difference between the image description and the prompt in the ‘full generation parameters’?
A year ago I saw a comment I often think about: that India’s economy, where a lot of call center and remote workers are, is “token-based”. LLMs are going to hurt their labor market, but they are also the best placed to profit from LLMs, having many established consumers already.
If you have a craft capable of launch and re-entry, do you even try?
It is basically having to choose between two inhospitable, nuclear-barren wastelands. But one of them has oxygen. Hell yes I try.
Install text-generation-webui, enable its “whisper stt” option, and you can talk to your computer. As a non-native speaker I prefer reading the English output to listening to it, but they provide TTS as well.
Straight from Avalon.
It is called fine-tuning. I haven’t tried it, but oobabooga’s text-generation-webui has a tab for it and I believe it is pretty straightforward.
Fine-tune a base model on your dataset, and then you will need to format your prompt the way your AIM logs are organized, e.g. append “<ch00f>” at the end of your text-completion task. It will complete it in the way it learned.
If you don’t have the GPU for it, many companies such as Mistral offer fine-tuning as a service.
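To make the formatting idea concrete, here is a minimal sketch of turning speaker-tagged chat logs into (prompt, completion) pairs for fine-tuning. The log format and the “<ch00f>” tag are assumptions based on the comment above, not a real spec; the helper name is made up.

```python
def logs_to_examples(lines, target_speaker="ch00f"):
    """Split a tagged chat transcript into fine-tuning examples so the
    model learns to continue whenever `target_speaker`'s tag appears.
    (Hypothetical format: each line starts with '<speaker> '.)"""
    tag = f"<{target_speaker}>"
    examples = []
    for i, line in enumerate(lines):
        if line.startswith(tag):
            # Everything before this line becomes the prompt, ending
            # with the speaker tag the model must continue from.
            prompt = "\n".join(lines[:i]) + f"\n{tag} "
            completion = line[len(tag):].strip()
            examples.append({"prompt": prompt, "completion": completion})
    return examples

# Toy log in the assumed format:
log = [
    "<friend> hey, you up?",
    "<ch00f> yeah, just soldering something",
    "<friend> classic",
    "<ch00f> it never ends",
]
pairs = logs_to_examples(log)
```

At inference time you would then end your prompt with `<ch00f> ` so the model completes in the style it learned from those examples.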
Not directly, no.
It may be able to code one (the code is relatively short and well known) and provide a training program, but then you would need to spend a few trillion tokens to make it generate the training data.
I use it almost daily.
It does produce good code. It just does not reliably produce good code. I am a programmer; it makes my job 10x faster, and I usually only have to fix a few bugs in the code it generates. Over time, I learned what it is good at (UI code, converting things, boilerplate) and what it struggles with (anything involving newer tech, algorithmic understanding, etc.).
I often refer to it as my intern: it acts like an academically trained, not particularly competent, but very motivated, fast-typing intern.
But then, I also work in the field. Prompting it correctly is too often dismissed as a skill (I used to dismiss it too). It takes more understanding than people give it credit for.
I think that, like much IT technology, it will gradually go from being a dev tool to an everyday tool.
All the pieces of the puzzle needed to control a computer by voice using only natural language are there. People don’t realize how big that is. Companies haven’t assembled it yet because it is actually harder to monetize than to build. Apple is probably in the best position for it. Microsoft will attempt it and fail as usual, and Google will probably put in a half-assed attempt. I’ll personally go for the open source version.
That’s really interesting! It shows which communities share users. I am part of jlai.lu, a French-speaking instance that is relatively isolated, but also of slrpnk.net, which seems very spread out!
Would it make sense to compute the standard deviation of each instance’s communities? It would give an idea of which ones are islands and which are more extended. Not sure whether it makes more sense to compute it in the 2 projected dimensions or in the original 21934, though.
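A rough sketch of that spread idea, assuming you have each instance’s community coordinates as an array (either from the 2-D embedding or the original high-dimensional space): measure how spread out an instance is as the RMS distance of its communities from their centroid. The data below is made up; the function name is hypothetical.

```python
import numpy as np

def instance_spread(coords):
    """RMS distance of an instance's communities from their centroid.
    coords: (n_communities, n_dims) array; works in 2 dims or 21934."""
    centroid = coords.mean(axis=0)
    return float(np.sqrt(((coords - centroid) ** 2).sum(axis=1).mean()))

# Fake instances: one "island" with tightly clustered communities,
# one spread-out instance with scattered communities.
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(50, 2))
scattered = rng.normal(0.0, 2.0, size=(50, 2))
```

Since the measure only depends on distances to the centroid, the same function applies unchanged whether you feed it the 2-D projection or the original 21934-dimensional vectors; the two can rank instances differently, though, since the projection distorts distances.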