Hacker News | timlod's comments

So, I checked, and this life cycle assessment (first link I could find, seems very comprehensive) states a 27x return on energy: https://www.vestas.com/en/sustainability/environment/lifecyc...

This as opposed to a tweet about someone who 'read a life cycle analysis article in some engineering journal like 10 years ago'.

Please don't spread misinformation.


It's a bot account; the comment will get deleted after a while. Just look at its history: there are many on HN, Reddit and other moderated forums.


Huh? This user does not look like a bot account at all to me. The account was created in 2016, and the user regularly comments on technical posts - sometimes encouraging developers and providing useful links.

On the other hand, they seem to have rather 'anti-woke' views. They already posted the same stupid (sorry) Twitter link in another thread and got similar responses, by the way.

https://news.ycombinator.com/item?id=43167067

So, they are certainly no bot by any sensible definition, but rather someone who is hindered by their strongly-held views and limited will or ability to critically evaluate sources. I wish them the best.


I've used it before for realtime-uses (not production though where you'd need 100% guarantees for no drop-outs), latency has not been an issue. I think you essentially get the latency of the plugins you're using since this is a JUCE wrapper.

Ultimately it depends on how much work you do and how efficient an audio thread you built. pedalboard is not a library which does audio playback itself; it just processes the buffers you give it. I used python-sounddevice, which provides bindings for PortAudio - if you don't use much CPU, you can comfortably run plugins in realtime.

Obviously you're still beholden to the GIL in Python (until further notice) so if worse comes to worst you might experience the unlucky dropout.
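To make the "it just processes buffers" pattern concrete, here's a minimal sketch. The `toy_effect` function and the callback wiring are my own illustration (a stand-in for a real pedalboard chain inside a python-sounddevice callback), not anyone's production code:

```python
import numpy as np

def toy_effect(buffer: np.ndarray) -> np.ndarray:
    """Toy soft-clipper standing in for a real plugin chain,
    e.g. something like pedalboard.Pedalboard([...])(buffer, sr)."""
    return np.tanh(buffer)

def audio_callback(indata, outdata, frames, time, status):
    # This is the shape of a python-sounddevice stream callback:
    # keep the work cheap, since any pause here (GIL included)
    # means an audible dropout.
    outdata[:] = toy_effect(indata)

# Offline check: push one block of samples through the "chain".
block = np.linspace(-2.0, 2.0, 512, dtype=np.float32)
out = np.empty_like(block)
audio_callback(block, out, len(block), None, None)
```

In the real-time case you'd hand `audio_callback` to `sounddevice.Stream(callback=...)` instead of calling it yourself; the effect code itself doesn't change.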


Ah that's really interesting, thank you!


It's not virtual analog tech scientists building better musical instruments :)

I had no clue what a fog harp is, turns out it's used for harvesting water - interesting tech!


It was developed in Japan, where they found the traditional fog horn, a horn used by ships to announce their presence in low-visibility conditions, to be an ugly drone harsh on the ear. The fog harp was designed to have a rich, melodic pleasance to it, to give the mind a moment of respite from the stress of navigating essentially blindly.


The title is a bit confusing as open-source separation of ... reads like source separation, which this is not. Rather, it is a pitch detection algorithm which also classifies the instrument the pitch originated with.

I think it's really neat, but the results look like it could take more time to fix the output than to use a manual approach (if really accurate results are required).


Thanks for clarifying.

In fairness to the author, he is still at high school: https://matthew-bird.com/about.html

Amazing work for that age.


He's definitely a talent to watch!


Wow, I didn't see that. Great to see this level of interest early on!


Is “source separation” better known as “stem separation” or is that something else? I think the latter term is the one I usually hear from musicians who are interested in taking a single audio file and recovering (something approximating) the original tracks prior to mixing (i.e. the “stems”).


Audio Source Separation I think is the general term used in research. It is often applied to musical audio though, where you want to do stem separation - that's source separation where you want to isolate audio stems, a term referring to audio from related groups of signals, e.g. drums (which can contain multiple individual signals, like one for each drum/cymbal).


Stem separation refers to doing it with audio playback fidelity (or an attempt at that). So it should pull the bass part out at high enough fidelity to be reused as a bass part.

This is a partly solved problem right now. Some tracks and signal types can be unmixed more easily than others; it depends on what the sources are and how much post-processing (reverb, side-chaining, heavy brick-wall limiting and so on) was applied.


> This is a partly solved problem right now.

I'd agree with the "partly". I have yet to find one that either isolates an instrument as a separate file or removes one from the rest of the mix without negatively impacting the sound. The common issues I hear are similar to early low-bitrate internet compression. The new "AI" versions are really bad at this, but even the ones available before the AI craze were still susceptible to them.


I'm far (far) from an expert in this field, but when you think about how audio is quantized into digital form, I'm really not sure how one solves this with the current approaches.

That is: frequencies from one instrument will virtually always overlap with another one (including vocals), especially considering harmonics.

Any kind of separation will require some pretty sophisticated "reconstruction" it seems to me, because the operation is inherently destructive. And then the problem becomes one of how faithful the "reproduction" is.
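A tiny numpy sketch of why the operation is destructive: two different source decompositions can sum to the same mix, so the mix alone can't tell you who contributed what (the signal choices below are purely my illustration):

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr

# Two "instruments" sharing energy at 440 Hz (harmonic overlap):
a1 = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
b1 = np.sin(2 * np.pi * 440 * t)

# A different split of that same 440 Hz energy between the sources...
a2 = np.sin(2 * np.pi * 220 * t)
b2 = 1.5 * np.sin(2 * np.pi * 440 * t)

# ...produces an (essentially) identical mix: summation destroys
# the attribution, so separation has to guess/reconstruct it.
mix1 = a1 + b1
mix2 = a2 + b2
```

This is why separation models lean on learned priors about what instruments sound like, rather than inverting the sum directly.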

This feels pretty similar to the inpainting/outpainting stuff being done in generative image editing (a la Photoshop) nowadays, but I don't think anywhere near the investment is being made in this field.

Very interested to hear anyone with expertise weigh in!


I won't say expertise, but what I've done recently:

1) used PixBim AI to extract "stems" (drums, bass, piano, all guitars, vocals). Obviously a lossless source like FLAC works better than MP3 here

2) imported the stems to ProTools.

3) from there, I will usually re-record the bass, guitars, pianos and vocals myself. Occasionally the drums as well.

This is a pretty good way I found to record covers of tracks at home, re-using the original drums if I want to, keeping the tempo of the original track intact etc. I can embellish/replace/modify/simplify parts that I re-record obviously.

It's a bit like drawing with tracing paper: you're creating a copy to the best of your ability, but you have a guide underneath to help you with placement.


It's not really digital quantisation that's the problem, but everything else that happens during mixing - which is a much more complicated process, especially for pop/rock/electronic etc., than just "sum all the signals together".

There's a bunch of other stuff that happens during and after summing which makes it much harder to reliably 100% reverse that process.


I didn't mean to say that quantization was the problem, just that you're basically trying to pick apart a "pixel" (to continue my image-based analogy) that is a composite of multiple sounds (or partially-transparent image layers).

I was sincere when I said:

> I'm really not sure how one solves this with the current approaches.

I was hoping someone would come along and say it is, in fact, possible. :)


Source separation is a general term, stem separation is a specific instance of source separation.


No, it doesn't read like that. The hyphen completely eliminates any possible ambiguity.


The title of the submission was modified. If you read the article, it says:

Audio Decomposition [Blind Source Seperation]


Maybe added later by OP? Because there is no hyphen in the article’s subtitle.

>Open source seperation of music into constituent instruments.


The complaint:

> The title is a bit confusing as open-source separation of ... reads like source separation, which this is not.


I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs). I understand the article is hyperbole, perhaps for comedic purposes, and I actually align with perhaps 80% of the author's views, but it's a bit much.

There is industry-changing tech which has become available, and many orgs are starting to grasp it. I won't deny that there's probably a large percentage of projects which fall under what the author describes, but these claims are doing a bit of a disservice to the legitimately amazing projects being worked on (and the competent people performing that work).


> I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs).

Consultants are obviously making huge amounts of money implementing LLMs for companies. The question is whether the company profits from it afterwards.


Time will tell, but I would cautiously say yes.

Note that I don't usually work in that particular space (I prefer simple solutions and don't follow the hype), didn't sell myself using 'AI' (I was referred), and also would always tell a client if I believe there isn't much sense in a particular ask.

This particular project really uniquely benefits from this technology and would be much harder, if possible at all, otherwise.


Would you recommend still getting into freelance consulting (with an ML background) at this point in time? Or will the very technology you're consulting about replace you very soon? AutoML, LLMs etc.


I'd say it depends on what your other options are. I don't think the technology will replace me soon, even at the rate I see it improving. At this point it's still a tool we can use to deliver faster, if we use it wisely.

Especially about ChatGPT et al. - I use it daily, but having the proper foundation to discern and verify its output shows me that it's still very far from being a competent programmer for any but the 'easy' tasks which have been solved hundreds of times over.

Like I hinted, I also view all of this hype sceptically. I dislike the 'we need AI in our org now!' types and am not planning on taking on projects if I don't see their viability. But there's obviously still a lot of demand and people offering services like those in TFA who're just looking to cash in, and that seems to work.

If you can find projects you believe you can make a difference in with your background, why not give it a shot?


Thank you!

> I'd say it depends on what your other options are.

> why not give it a shot?

You're right. If it fails because of automation by ML, most other career paths in the tech sphere would fail, too.


Pretty cool! Tried it with RHCP's Dani California (https://lamucal.ai/songs/red-hot-chili-peppers/dani-californ...) and there are a lot of wrong chords still. Impressive nonetheless, and already quite useful for the song-part recognition (assuming it's all the ML)! The lyrics seem right too.

The source separation only seems to be available when downloading their app, which I didn't do, so I can't comment on that.


I downloaded and tried their app, experiencing the audio source separation feature, and ended up with five tracks (piano, vocals, drums, bass, and others). It sounds pretty good, but unfortunately, there is no guitar track.


I do like the Task Energy plot towards the end (second to last plot).

It shows how wasteful these new Intel processors are compared to their direct competition.


I’ve been using Windows throughout my childhood and start of my CS career - now I use Windows for specific software (audio/music) and Linux for developing (about 8 years I guess). I had a 1-year stint with macOS because I was developing an iOS app, and have been the troubleshooter for people with macs at my previous job, so I consider myself somewhat ‘multilingual’ when it concerns OSs.

As a power user, Linux is just so much nicer. I constantly get frustrated, especially with macOS, about stuff that I can't easily change. In Linux my stuff works, and if it doesn't, it can (usually) be made to work. In Windows/Mac it'll often take considerable effort to make the system work the way I want, or it's just not possible.

I think with proprietary software ‘it just works’ is only a thing if you’re happy with the basic experience that is tuned to the average person. If you have more complex needs, you should be using Linux (and if you know your stuff or use the right distro, things will likely also ‘just work’).


Well, weird movements in games should be a thing of the past in the near future, as we can begin to extract motion capture data from videos of normal people acting normally.

I think it depends on the type of game you have, but I wouldn't underestimate this type of technology for say, open world games where it might make the game more immersive due to convincing realism.


> Well, weird movements in games should be a thing of the past in the near future, as we can begin to extract motion capture data from videos of normal people acting normally.

I think you misread my posts. We don't have awkward animations because our mocap isn't good enough, we have awkward animations because typical human motion looks awkward - our brains just mostly ignore that.

People are awkward; we don't actually want characters in games/movies/etc to be like real people. Very few movies, for example, would be well served by conversations frequently and for non-plot-related reasons being interrupted by loud noises, having people talk over each other and nonverbally try to figure out who gets to speak, having characters ask "What?" and then begin to reply without waiting for the answer because their brain caught up half a second later, etc.


I see, I did misread it!

But I don’t think I can agree with what you meant then - why would our brain mostly ignore it in real life but not in video games? Where is the transition from something feeling real (in an immersive way) to us not liking it because it feels awkward, and why does it happen? I’m imagining a “perfect simulation” game which is like real life in ways that matter/don’t get in your way in terms of gameplay - I think everyone would be awed (of course this can be argued though). What would need to degrade in terms of realism for it to seem awkward/not be immersive anymore?

I agree with the movie example, but in a game you don’t have to watch the mundane - it’s just background “noise” to make the world believable.


That already exists in some forms, most notably perhaps in Dubler 2. I’m also working on an open source version of something similar, albeit quite involved and aiming at professional real-time performance and drumkit augmentation (if all goes well to share on HN this month for the first time :))


Hey that is cool! lemme know if you need any help, send mail if required!

