
Are you able to pull upcoming titles? All I want is a weekly/monthly list of books by authors I've read which are coming out, and I've not been able to find it or to build it.


A couple of years ago I looked for a similar service and failed to find one. I did however find this incredible podcast network called New Books In, where they interview authors about their new books. It's a massive network that's broken down by categories that can get pretty niche. Everything from "Digital Humanities" to "Diplomatic History" to "Critical Theory". Episodes appear in multiple categories, so broad categories like "Science" also exist.

https://newbooksnetwork.com/subscribe

It's definitely biased towards academia, which I personally see as a pro, not a con.


If the data is present in one of the extractors, yes, but I think only Amazon and similar stores keep this kind of data right now. We don't have an extractor for Amazon yet.

After v1.0.0 is out I plan to add the ability to add books manually to the database, at which point we'll be able to start improving the database without relying on third-party services.


You can get an RSS feed from https://bookfeed.io with authors you want to track. Been using it for years at this point :-)


Thanks for sharing. This is exactly the type of utility that vibecoding is for. It takes 5 seconds to ask GPT to write a script to do this, tailored to your specific use case. It's way faster than trying to get someone else's repo up and running.
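To make that concrete, here's a sketch of the kind of throwaway script GPT might produce for the upcoming-titles use case, using the public Open Library search API. The endpoint is real, but the response fields used here ("docs", "title", "first_publish_year") and the author list are assumptions; treat it as a starting point, not a finished tool.

```python
# Throwaway "new books by my authors" checker (vibecoded-style sketch).
# Assumes the Open Library search API response contains "docs" entries
# with "title" and "first_publish_year" fields; authors are hypothetical.
import json
import urllib.parse
import urllib.request

AUTHORS = ["N. K. Jemisin", "Ted Chiang"]  # hypothetical reading list

def search_url(author: str) -> str:
    """Build the Open Library search query for one author, newest first."""
    query = urllib.parse.urlencode({"author": author, "sort": "new"})
    return f"https://openlibrary.org/search.json?{query}"

def recent_titles(payload: dict, since_year: int) -> list[str]:
    """Pull titles first published on or after `since_year` from a response."""
    return [
        doc["title"]
        for doc in payload.get("docs", [])
        if doc.get("first_publish_year", 0) >= since_year
    ]

def check(author: str, since_year: int) -> list[str]:
    """Fetch and filter in one step (performs a network request)."""
    with urllib.request.urlopen(search_url(author)) as resp:
        return recent_titles(json.load(resp), since_year)
```

Run `check(author, 2024)` for each author from a weekly cron job and diff against the previous run's output to get the "new this week" list.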


Selfware.

https://old.reddit.com/r/ChatGPTCoding/comments/1lusr07/self...

Gonna be lots of posts of selfware like that soon.


I like it, though I'm sure we'll end up being stuck with "vibe ware".


I think you either coined (kudos) or spotted the true "term du jour" here.


people don't even get it :-]


Sure thing ...

And, yes, indeed, AI coding is having an order-of-magnitude effect along the lines that "low-code" was treading ...

... also, for less-capable or "borderline" coders, the effort/benefit equation has radically shifted.


Riffs need a downvote button. I'm not sure what the team's training/eval approach is, but you need a user signal for poor quality results, so you can improve over time.

ex: https://www.riffusion.com/riffs/2a42259b-1557-4913-bc04-5f0a...

Amazing progress since the original announcement.


But do you have a YouTube channel? (jk)

Are you working remotely with this set up?


I don’t have a YT channel; seems like I should, right? I work remotely and use Starlink for internet access. It’s mounted between my solar panels.


Hey, I'm a legally blind engineering manager (Stargardt's macular degeneration). The accessibility community was hugely welcoming when I started to lose my vision at about the same age you are. I'm happy to talk through my experience and be helpful in any way I can. chrmaury at gmail.


While Google may be the leader in Machine Learning tools, they are way behind when it comes to Machine Learning services.

They've only just now released a Natural Language Understanding service in beta, and it is more limited than comparable NLU services from Microsoft and others.

While tooling is important, the market for low-level tools is much smaller than for the services built using those tools. The vast majority of businesses that could benefit from machine learning don't have the expertise to run RNNs using TensorFlow, but do have engineers who can integrate APIs that leverage trained classifiers.


> they are way behind when it comes to Machine Learning services.

Umm, there's five separate managed services within the Google ML family: Vision (GA), Translate (GA), Natural Language (Beta), Speech (Beta), and Cloud ML (Alpha)

https://cloud.google.com/vision/
https://cloud.google.com/translate/
https://cloud.google.com/natural-language/
https://cloud.google.com/speech/
https://cloud.google.com/ml/


The market for services is several orders of magnitude smaller than the market for products that make use of ML. It's probably OK if they don't focus too heavily on the services side.

Google makes heavy use of these tools internally to build a wide range of products (and enhance existing ones).


How do you see those two as being different?

In my mind, products that make use of ML are using ML services as the back end. An example would be the recent wave of bot companies: many are not rolling their own NLU system but rather leveraging services like wit.ai or Microsoft's LUIS.
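For a concrete sense of that pattern: a bot product's own code can stay this thin when the NLU heavy lifting is a hosted service. Everything below (the endpoint, the response shape, the intents) is a hypothetical stand-in, not wit.ai's or LUIS's actual API.

```python
# Sketch of "product integrates an NLU service as its backend":
# post raw user text to a hosted classifier, then route on the intent
# it returns. Endpoint, payload, and intents are invented for the example.
import json
import urllib.request

NLU_ENDPOINT = "https://nlu.example.com/parse"  # hypothetical service

def parse_utterance(text: str) -> dict:
    """Send user text to the hosted NLU service (performs a network call)."""
    req = urllib.request.Request(
        NLU_ENDPOINT,
        data=json.dumps({"q": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"intent": "...", "entities": {...}}

def dispatch(nlu_result: dict) -> str:
    """Route the classified intent to product logic; no ML expertise needed."""
    intent = nlu_result.get("intent")
    if intent == "check_order":
        order_id = nlu_result.get("entities", {}).get("order_id", "?")
        return f"Looking up order {order_id}..."
    if intent == "greet":
        return "Hi! How can I help?"
    return "Sorry, I didn't catch that."
```

The point is where the expertise lives: the service vendor trains and hosts the classifier, and the product team only writes `dispatch`.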


Google has done a far better job of abstracting the underlying math of machine learning into vertical-oriented services like the CloudVision API and the Cloud Natural Language API.

In contrast, the AWS Machine Learning service provides a 100% vendor-locked interface to logistic regression and that's it. You can't even import or export models. They just hired Alex Smola to do something about that. We'll see what comes of that.


This really wasn't my intent. I sincerely apologize for communicating otherwise.

I identify as blind or low-viz and wouldn't want to take away from others who also identify as such or as disabled.

That line was meant to be a response to the fears that it seemed like my family and friends had about my diagnosis. Fears that I wouldn't be able to remain independent or be successful. You know how loved ones generally fear the worst possible scenario. Anyway, wording it this way was terrible, and I'm sorry. I'll see if I can make an edit and rephrase things.


Thanks, I know it wasn't. Likewise, my intent wasn't to make you feel bad. I imagine this landscape is all very new to you, and recognize that it's probably lots to come to grips with and navigate. To that end, kudos for finding a problem and running with it. :)


I definitely agree with you that the last thing I would want readers to take away from this article is that they don't have to worry about accessibility or universal design. Until we have better tools, we should be providing the best possible experience with the tools we do have.

I also agree that we shouldn't be creating separate experiences for the blind. I think it's generally acknowledged that they end up being worse than a combined interface, never getting the resources or new features that experiences for sighted users get.

Where we seem to disagree is on the role that screen readers play in limiting the usability of technology. On the one hand, they are amazing because they provide access to technology that would otherwise not exist. On the other hand, by virtue of the way they function (mapping a two-dimensional visual experience into a one-dimensional stream of audio), using a screen reader can only be so efficient.

This lack of usability puts access to technology beyond the reach of many who are less tech-savvy than you or I, and given that the vast majority of people losing their vision in the US are the elderly, there are a lot of people who fall into that category. What's worse, the rate of vision loss is set to double as baby boomers age.

I totally agree that the medical model of accessibility sucks, but I think screen readers fall into that category. They seek to adapt an experience designed for others to the needs of the disabled. Conversational interfaces have the potential to create a consumer-quality experience that, by its very nature, is accessible (at least to the blind). And accessible by default is the best possible outcome.


It's interesting to read that you conceptualize screen readers as rendering a 2-D environment as audio. I'm a very visual/spatial person, but I've always conceptualized them as rendering a tree of GUI widgets, rather than a visual environment. I guess it's the difference between thinking of my desk as a visual collection of objects, and more as an object with an Arduino/RPI in the top drawer, papers and folders in the second, etc. Not saying either is wrong, just that maybe it's a matter of conceptualizing UIs as groups of collected and organized widgets, rather than as laid out on a map. I've come to enjoy developing with React because I can say "here's my workspace for a given task. It has a toolbar containing these related functions, these two loosely-related larger workspaces, etc." Then I let a visual designer come along after and make things look better. :)
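One way to make both mental models in this exchange concrete: treat the UI as a tree of widgets, and the screen reader's output as a depth-first flattening of that tree into a single linear stream the user steps through. The widget structure below is invented purely for illustration.

```python
# Toy model: a UI as a widget tree, and a screen reader's announcement
# order as a depth-first linearization of that tree. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Widget:
    role: str                       # e.g. "toolbar", "button", "list"
    label: str = ""
    children: list["Widget"] = field(default_factory=list)

def linearize(widget: Widget) -> list[str]:
    """Depth-first walk: the order a screen reader would announce items."""
    stream = [f"{widget.role} {widget.label}".strip()]
    for child in widget.children:
        stream.extend(linearize(child))
    return stream

ui = Widget("window", "Mail", [
    Widget("toolbar", "", [Widget("button", "Reply"), Widget("button", "Forward")]),
    Widget("list", "Inbox", [Widget("item", "Re: lunch?")]),
])
```

Either way you picture the source (a 2-D layout or a widget tree), the output the user navigates is the one-dimensional `stream`, which is why good information architecture matters so much.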

Anyhow, I look forward to reading more about your SDK. Where can I learn more? I'm building an app that could benefit from a conversational UI on top of the traditional one and would be interested in reading up on what you offer, particularly as it's meant for blind users too.


You can check our SDK out at developer.conversantlabs.com. It's currently in a developer preview. Send me an email at chris@conversantlabs.com. It would be great to talk more. If our conversation in this thread is any indication, I think we'll have a pretty good discussion :)


I conceptualise them as a non visual means of surfacing an n-dimensional information architecture. But I'm just weird like that.

One thing screen readers are super good at is exposing shitty IA design, which is regrettably common.

That said, it cuts both ways. There is a public transport app in the UK (Traveline GB) that as a low viz (legally blind) user I find incredibly frustrating to use, but my no viz pals absolutely love.

In this case it seems the IA is there but the visual interface to it is worse than what voiceover exposes.

Accessibility is hard.


There are a lot of professional developers who are blind. They use accessible IDEs paired with a Screen reader. Here is a good description: http://stackoverflow.com/a/453758/319013

In the future, I think there is a lot of potential for pairing a Conversational UI and a natural language programming...language. Eve looks really promising, and could change the way that everyone writes software, and not just the blind: http://eve-lang.com/

Here is a demo: https://youtu.be/VZQoAKJPbh8?t=46m52s


Sorry for the let down. I feel her pain. We're working on it, and should have something for her soon. Email is definitely high on our list of tasks that need a conversational layer.

