Pretty sure it's impossible to compute unique/group in O(n) unless either the data is pre-sorted or your range is extremely limited (and the problem says nothing about either), so it has to be at least O(n log(n)).
For K, I think the solution is to find unique first (n log(n)), then for each unique value go over the list and count how many input values match (multiply by n). That's O(n^2 * log(n)), even worse than I wrote initially.
No, the k solution computes the unique elements and group independently. #:'=v[;1] uses group for the first column, and ?v[;1] uses unique for the second. (;) is list notation.
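Roughly, in Python terms (just a sketch to show the semantics; k's group returns indices per distinct value, unique returns distinct values in order of first appearance):

```python
def group(xs):
    # Map each distinct value to the list of indices where it occurs,
    # keyed in order of first appearance (like k's = on a list).
    out = {}
    for i, x in enumerate(xs):
        out.setdefault(x, []).append(i)
    return out

def unique(xs):
    # Distinct values in order of first appearance (like k's ?).
    return list(dict.fromkeys(xs))

v = ["a", "b", "a", "c", "b", "a"]
print([len(ix) for ix in group(v).values()])  # counts per value: [3, 2, 1]
print(unique(v))                              # ['a', 'b', 'c']
```

Counting the lengths of the groups corresponds to #:' applied to the result of group.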
Your analysis of the algorithm you described is also wrong: it would be O(n*log(n) + n^2), i.e. O(n^2), because the counting pass runs once per unique element; it adds to the cost of unique rather than multiplying it.
In the random-access model, building a hash table of n elements (and doing n lookups) is O(n) average case, if the elements have bounded size. More generally, it's linear in the total size of the input. If the strings are very long it can take a while to hash and compare them, but there's never a log(n) factor. Maybe you're confusing this with results from comparison sorting?
Both solutions are O(n) with the assumptions of random-access memory and bounded string size (or hash/comparison time), which I consider to be pretty standard.
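A single hash-based pass gets both the unique values and their counts at once, which is the O(n) average-case argument in a nutshell (Python sketch; the function name is mine):

```python
from collections import Counter

def unique_with_counts(xs):
    # One pass over xs: each element is hashed once, so this is O(n)
    # average case, assuming bounded hash/comparison cost per element.
    counts = Counter(xs)  # key order = order of first appearance
    return list(counts), list(counts.values())

vals, cnts = unique_with_counts(["a", "b", "a", "c", "b", "a"])
print(vals)  # ['a', 'b', 'c']
print(cnts)  # [3, 2, 1]
```

Note there's no per-unique-value scan over the whole list, which is where the n^2 in the earlier analysis came from.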