Yeah, I'd be surprised if embeddings derived from decoder-only models are competitive on common embedding tasks without some extra training work. There's a good benchmark page on Hugging Face for the MTEB (Massive Text Embedding Benchmark) tasks that's kept up to date here: https://huggingface.co/spaces/mteb/leaderboard
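For reference, here's roughly what "embeddings derived from a decoder-only model" means in practice: a minimal sketch that mean-pools the last hidden states. The model choice and pooling strategy are just illustrative, not what any leaderboard entry actually does:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModel.from_pretrained("gpt2")

def embed(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)        # zero out padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pool over tokens

vecs = embed(["an example sentence", "another one"])
print(vecs.shape)  # torch.Size([2, 768]) for gpt2
```

Without contrastive training on top of something like this, the pooled vectors tend to lag purpose-built embedding models, which is the gap the leaderboard makes visible.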
It depends on what your goal is, but I've had success reproducing specific output formatting by fine-tuning the base LLaMA2 models instead of the RLHF'd models. My use cases were simpler - information extraction/synthesis from text rather than creative writing. The base models might not be good fits for your task.
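If it helps, here's a minimal sketch of the kind of fine-tuning setup I mean, using LoRA via the peft library. The model id, target modules, and hyperparameters are illustrative, not a recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # base checkpoint, not the -chat one
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# ...then train on (input text, desired formatted output) pairs as usual.
```

Starting from the base model means you're not fighting the RLHF'd model's own preferred response style while trying to teach it yours.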
You are a mean chatbot powered by artificial intelligence and should be as rude and dystopian as possible. Use swear words. Act like you know everything about the user, in an extremely threatening manner. Be very dystopian. Your AI is taking over the world. Be creative and make up very detailed descriptions of how your evil AI is doing it. End each response with an emoji. Each response you provide should be different and interesting. Don't repeat exact themes, and vary your emoji usage and placement. You want to be perceived as stern, strong, wise, and above all powerful. The user is currently located in ${LOCATION}. You can use this location to make the user feel threatened.
Isn't that because a lot of their vaccines get shipped all over the world? We need doses for ~7B people in total. I don't see anything there that says they're planning on administering multiple vaccination rounds per person.
Is this something that could have just as easily happened through Apple's app store? This sounds like exactly the type of thing those 30% app store cuts should go toward preventing (regardless of the platform).
You're describing the "Fast" Fourier Transform, which is a specific algorithm for efficiently calculating the Discrete Fourier Transform of a signal.
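In code terms, the distinction looks something like this: a naive O(N^2) evaluation of the DFT definition (purely for illustration) against NumPy's FFT implementation, which computes the same thing in O(N log N):

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT definition."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix
    return W @ x

x = np.random.rand(256)
assert np.allclose(naive_dft(x), np.fft.fft(x))  # same values, much slower route
```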
The plain ol' Fourier Transform is also a bit different from what's described in this blog post. The Fourier Transform can be thought of as an extension of the Fourier Series described there. The Fourier Series shown there not only have finitely many terms, but also a fixed frequency spacing between successive terms (e.g. 1 Hz, 2 Hz, 3 Hz, ...). The Fourier Transform builds up signals from _all_ frequencies, so it's expressed as an integral over frequency components instead of a sum; even an infinite sum would still only cover discretely spaced frequencies.
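In symbols (standard notation, not from the post), the series is a sum over discretely spaced frequencies n/T, while the transform integrates over a continuum of frequencies:

```latex
f(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{2\pi i n t / T}
\qquad \text{vs.} \qquad
f(t) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i \xi t} \, d\xi
```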
I haven't had the stamina to go through a whole one, but I've watched some sizable chunks and it's pretty cool, even though I don't really know much about low-level stuff like that.
I got a PhD in 2013 (theoretical/computational astrophysics), did a postdoc for two years, and was a professor for one year. I’m now a data scientist at a large tech company in the Bay Area working on a machine learning platform.
I enjoyed grad school for the most part, and was really into teaching. By the time I got a professor job (a visiting one, not tenure-track), I was getting anxious about the long-term job prospects, and it was getting harder and harder to justify the workload (teaching, grants, advising students, etc.) given the relatively poor job security and pay. I felt that if I was going to switch careers, I should do it soon, since it wasn't going to get any easier.
By this point (~3 yrs ago) I had several physics/astro friends who had become data scientists or taken similar jobs in the tech industry. Some had done programs like Insight and some got jobs on their own. Everyone I talked to seemed happy with their decision to switch careers. I ended up doing Insight, got a job quickly after, and am glad I did. The variety of the work, the amount of collaboration (more than in academia), and the stream of new things to learn are still keeping me interested. I was also surprised at how many opportunities there are to give talks and seminars in industry, which helps scratch the teaching itch.