That brings back memories. They were definitely popular. In the early 2000s, I worked at a small company and one coworker had a bunch of Dilbert strips all over one of her cubicle walls. It wasn't an insane amount, but her cube was on the way to the break room, so it was visible to everyone passing by. Apparently the owners of the company did not like that and had her take them down.
It's really a great way to train your ear, and fun, too. I can play ukulele, but mostly just to strum along with songs; a few years ago, though, I started picking out notes to try to recreate melodies of songs I knew or had heard recently. At first it was slow going, with lots of searching on the fretboard for the right note, but over time I worked up the skill to mostly get the melody within the first few tries. It was the most amazing feeling to realize I could listen to a song and then reproduce it by ear.
I found that it's also an excellent way to "feel" the structure of a melody as well since you're essentially building it up again. Of course you could read music to see the actual melody, but working it out this way feels a bit more intimate.
My disconnect is that I can't read sheet music. So I can hear it, then memorize where it is on the piano/keyboard... but that just teaches you to play piano by ear. It doesn't teach you how to play music in the traditional sense.
I guess this app showing you the sheet music as you find the notes can help with that, but as others noted, I'd like a "mess around" mode before a "test" mode.
I have a great ear and am terrible at reading sheet music. Fine if you aspire to be a rock guitarist. Not so fine if you aspire to be a classical pianist.
Funny, but I'm finally tired of my poor sight reading and have set a goal for 2025 to average one hour of piano playing from sheet music per day.
And I agree...a "mess around" mode on the app would be great. Feels almost punitive when I make a mistake.
If you're looking to improve your sight-reading and don’t mind playing church music, I highly recommend picking up a second-hand copy of an old Episcopal Church hymnal (I like the 1940 edition). All the pieces are four-voice and the rhythms are relatively simple, so you can concentrate on sight reading. Good luck!
Sounds like a good idea. Right now, I'm splitting my time between drill based content (Bartok, Gurlitt, Kunz, etc), beginner classical pieces, and the occasional blues.
Funny, but in church, I spend more time than maybe I should sight reading hymns during the sermon. :)
It's a very cool little game! One suggestion: could you make it so you can noodle around on the keyboard without submitting the answer and then once you've worked it out, have a submission mode? Right now, it's frustrating that if you enter a wrong note, it shows a message, so you can't experiment on the keyboard to try to work it out.
Hey mdnahas - sorry to hear that. I still haven't figured out what is going on with some versions of iPhones.
These worked:
- iPhone 16e iOS 18.3
- iPhone 13 Pro Max iOS 18.0
- iPhone 15 Pro Max iOS 17.5
- iPhone 13 iOS 16.3
These failed to produce sound:
- iPhone 15 Pro iOS 17.1
- iPhone SE 2022 iOS 15.4
I'm baffled - there doesn't appear to be any consistency around this bug. As others have pointed out (though I doubt it's the cause), you might check whether the hardware mute switch, Do Not Disturb, silent mode, battery saver, etc. is keeping it from producing sound.
I'll add your version to the list of iPhones that are having issues. Thanks for the feedback!
Having both modes could be good. Default allows pre-noodling and enabled gives satisfaction of being correct one-shot (or falls back to retry with noodle).
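The two modes described above could be sketched something like this (a toy illustration; the mode names and grading logic are made up, not anything from the actual app):

```python
from enum import Enum, auto

class Mode(Enum):
    NOODLE = auto()   # free exploration: wrong notes are not penalized
    SUBMIT = auto()   # one-shot: the played note is graded immediately

def handle_note(mode: Mode, played: str, answer: str) -> str:
    """Return the feedback the UI should show for a played note."""
    if mode is Mode.NOODLE:
        return "ok"                      # never punish while exploring
    return "correct" if played == answer else "retry"

print(handle_note(Mode.NOODLE, "F#", "G"))   # "ok" - noodling is free
print(handle_note(Mode.SUBMIT, "G", "G"))    # "correct"
```

The "falls back to retry with noodle" idea would just be the UI switching back to `Mode.NOODLE` after a `"retry"` result.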
I find that I use it on isolated changes where Claude doesn't really need to access a ton of files to figure out what to do, and I can easily use it without hitting limits. The only time I hit the 4-5 hour limit is when I'm going nuts on a prototype idea and vibe coding absolutely everything, and usually when I hit the limit, I'm pretty mentally spent anyway, so I use it as a sign to go do something else. I suppose everyone has different styles and different codebases, but for me it's pretty easy to stay under the limit, which makes it hard to justify $100 or $200 a month.
That's not face recognition. That's face detection. It just detects faces and sticks a label from a pre-selected list. Come on, this doesn't even pass the basic smell test. "Facial recognition" my ass. It doesn't recognize anyone. I could build this in a cave with scraps. There's a huge difference between the two: recognition means you have found a known person, detection means you found a person.
That's about the difference between eating sodium chloride and eating sodium.
This kind of privacy slop is overly popular in tech circles. Each participant just posts uninformed garbage, and then they link to each other with "citations" for sources that are wholly made up. It's really reducing the quality of information on this website, now that it's full of junior engineers and interns.
Those guys always obsess over CVEs and privacy and they’re always wrong about everything but have learned to mimic the language of people who know stuff. “There’s some evidence” / “here’s a source”. Ugh. Can’t stand it.
I wrote my own flashcard app and had a very basic import from Anki feature, and I have to admit that I underestimated how Anki handles it. My first attempt at import was very naive and sort of "flattened" the imported data into simple front/back content. It lost a lot of fidelity from the original Anki data.
After investigating the way Anki represents its flashcards a bit more, I can really appreciate the way Anki uses notes, models, and templates to essentially create "virtual cards" (my term).
I suspect other people creating their own flashcard apps underestimate the data model Anki uses and have a hard time matching their own data model with Anki's, which may be why decent import options are hard to find. If someone wants to support Anki deck import, they have to essentially use the same data model to represent notes and models (plus cloze deletions). I'm now adopting Anki's model for my flashcard app for better import fidelity.
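To make the "virtual cards" idea concrete, here's a minimal sketch of the note/model/template split. The class names and rendering are simplified stand-ins (real Anki adds template ordinals, cloze handling, scheduling state, etc.), but the fan-out is the key point: one note generates one card per template.

```python
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    front: str   # e.g. "{{Word}}"
    back: str

@dataclass
class Model:
    fields: list        # ordered field names
    templates: list     # each template yields one "virtual card" per note

@dataclass
class Note:
    model: Model
    values: dict        # field name -> content

def render(fmt: str, values: dict) -> str:
    # Naive {{Field}} substitution, enough to show the idea.
    for name, content in values.items():
        fmt = fmt.replace("{{" + name + "}}", content)
    return fmt

def cards_for(note: Note):
    """One note fans out into one (front, back) card per template."""
    return [
        (render(t.front, note.values), render(t.back, note.values))
        for t in note.model.templates
    ]

vocab = Model(
    fields=["Word", "Translation"],
    templates=[
        Template("Recognition", "{{Word}}", "{{Translation}}"),
        Template("Production", "{{Translation}}", "{{Word}}"),
    ],
)
note = Note(vocab, {"Word": "Katze", "Translation": "cat"})
print(cards_for(note))   # two cards from a single note
```

A naive front/back importer collapses that fan-out, which is exactly the fidelity loss described above.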
Regarding the SQLite data format, I was thinking it would be great if there were a text-based format instead for defining the deck and its contents as that would make it much easier to collaborate on shared decks on GitHub, like you suggest. It would be great to have a community work on essential flashcard decks together in an open format that encourages branching and collaboration. I know some groups do this with Anki decks, but I can't imagine the SQLite file format makes it easy to collaborate.
I don't think it would be that hard to come up with a universal text file-based format for a flashcard deck that supports notes, models, templates, and assets. For instance, we could have each note placed in its own text file and have the filename encode a unique ID for that particular note. Having unique identities for everything would make it easier to re-import updated decks to apply new updates if you had previously imported the deck. The note files could also be organized into sub-folders to make it easier to organize groups of info that should be learned together.
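A rough sketch of the one-note-per-file idea, assuming JSON note files whose filenames carry the stable ID (the layout and function names here are hypothetical, not an existing format):

```python
import json
from pathlib import Path

def save_note(deck_dir: Path, note_id: str, fields: dict) -> Path:
    """Write one note as <deck>/<note-id>.json; the filename is the ID."""
    deck_dir.mkdir(parents=True, exist_ok=True)
    path = deck_dir / f"{note_id}.json"
    path.write_text(json.dumps(fields, indent=2, sort_keys=True))
    return path

def load_deck(deck_dir: Path) -> dict:
    """Map note ID -> fields; IDs come from filenames, not file contents."""
    return {
        p.stem: json.loads(p.read_text())
        for p in sorted(deck_dir.glob("*.json"))
    }

def merge(existing: dict, incoming: dict) -> dict:
    """Re-importing an updated deck is a plain merge keyed by note ID."""
    merged = dict(existing)
    merged.update(incoming)   # updated notes win; new IDs are added
    return merged
```

Because every note is a small, stably named text file, diffs on GitHub stay readable and two people editing different notes never conflict, which is most of what deck collaboration needs.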
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.
This might be a case of just different standards for communication here. One person might want the absolute facts and assume everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).
I've seen this happen too. People will comment and say in the comment that they can't remember something when they could have easily found that information again with ChatGPT or Google.
Exactly, it was the first thing you'd do when you launched Word. Nowadays, the only option available would be "See less of Clippy" and he'd be back in the next session.
[Remind me again in an hour] [Remind me again in 15 minutes] [Changed my mind, keep him]
May everyone who makes such dialogues be afflicted with severe depression and be forced to ruminate at night about how empty they feel despite their "good" job and high salary.
I reckon it would be more like "Pay subscription to see slightly less of Clippy" with some small print explaining that "less" is relative to other people's future experience, not your current one.
People think AGI is far away, but I don't think HN commenters have this awareness:
> Cue 200 comments alternating armchair Descartes and pop neuroscience, then a top post linking a blog from 2011 that “settles it,” and a mod quietly locks tomorrow.
I don't think this is a serious test. It's just an art piece to contrast different LLMs taking on the same task, and against themselves since it updates every minute. One minute one of the results was really good for me and the next minute it was very, very bad.