Pat Gunn
Systems Programmer, occasional neuroscientist
Two things that work really poorly on LinkedIn:

1) Expert Answers - these are occasional writing prompts on technical questions, with random people taking up the challenge and writing their often misinformed answers in exchange for a little more career exposure. Can you offer a correction? No, you cannot; you can just sigh to yourself knowing other people are reading misleading content.

2) Group Forums - these might've been great if more care had gone into getting the right people to create and moderate them, and into nudging owners to actively moderate. Unfortunately, at least in the few forums I used to be part of (I left all of them recently), either there is no moderation whatsoever, so people post on completely unrelated things, or the forum owner does nothing but pin one of their own posts at the top for some free self-promotion while doing nothing else, or both. And there's a certain class of people working for marketing companies who will only post links to some pseudo-educational site that wants you to pay for its lousy content. So they're basically useless, even if it might seem like a "Databases Group" or a "Particular Programming Language Group" on LinkedIn should be useful.

LinkedIn as a whole isn't useless, but it has a lot of flaws, and the above two would be really worthwhile to fix, if they cared and could spare some time.
Sourabh Bagrecha
Senior Developer Advocate at MongoDB | ex Postman | GSoC Mentor | C100DEV
10mo
Bang on!
-
We've recently seen a number of companies that took advantage of an opensource license, and community comfort with such licenses, to get widespread adoption, then dumped that license. If you care about open licenses, it's usually best to avoid such companies/products: either find a fork or clone with a really open license, or even consider moving to some other closed-source product that never gained unwarranted spread from an earlier open period.

Meaning: avoid Terraform (use OpenTofu), MongoDB (you have lots of options, and there have always been plenty of things to dislike about it apart from the license), CockroachDB (usually Postgres is the right choice here), and Redis (use Valkey), among others.

It's great to see some good news on this front with Elasticsearch, which came back to opensource; it's good to forgive companies when they do this, but only when they actually do.

Licenses are one of those areas where, unfortunately, we can't avoid mixing a bit of politics into our work life (which generally we should try pretty hard to avoid). Fortunately, they're a little island of (mostly technical) opinions, meaning you're unlikely to rule yourself out of working productively with large portions of prospective coworkers by having and expressing a view.
-
It's good to see Cornell and Harvard expressing an intent to avoid making ex cathedra statements on politics or social issues; this is healthy for academia and would also be healthy to see in the private sector. An academic institution doesn't need a foreign policy, and when an employer goes quiet on politics and social issues, it gives employees more comfort in expressing their own views (whether, and to whatever degree, they may agree or disagree with whatever those tempting-to-make statements might have been). It's healthier yet when something like the Chicago Principles of Free Speech is adopted.

The idea, ideally, is not that workplaces become charged with politics or social issues. Instead, making it clear there's no institutional stance leaves room for the natural process of pushback whenever someone strident wants to push their views and quiet those of others, wherever on the political spectrum that person's views fall. When people get pushback, they usually retreat to the sensible truce adults have on these matters: people are in a workplace to get work done, and topics are broached or left un-broached in smaller circles, where smaller sets of people negotiate comfort with crossing boundaries on faith, politics, sex, and other standard smaller-group topics.

Getting institutions out of the way restores the normal and healthy status quo. Hopefully more academic institutions take their missions seriously enough that they won't let social values be a distraction from them.
-
Kudos to LinkedIn for letting us opt out of "the algorithm" for our feed and just see content from people we follow, in chronological order. You go into your settings, set it there, and it sticks.

So many big tech companies are really bad at letting users customise basic things (Google won't let you opt out of Shorts on YouTube, and a lot of social media might let you switch to follows-only, but only until you reload the page). It's nice to see one company do the right thing for a change on this front.
-
So far my strategy with regard to LLMs is wait-and-see-and-play-around-a-little.

At work we've been doing some ML for a while on some projects I'm on, and it's working well (but is high-effort), and I know enough specifics not to treat or think of all these technologies as being the same thing (but not enough to code anything nontrivial myself; to me they're mostly black boxen).

I'm not keen on changing how I code (yet) to bring in LLMs; I'm old-fashioned enough to be a vim user (IDEs annoy me). LLMs have made it easier to help people with their papers (they're very good at tuning up phrasing, if you don't mind them occasionally breaking the semantics). I'm still unimpressed with their responses compared with what I can learn from doing 10 minutes of spot research on something (doing spot research is an important life skill; I have a lot of thoughts on it I should write up).

So far I get the most use out of LLMs and general generative models as entertainment; prompting them in the right ways can produce amusing responses that are more sophisticated than older generative code (although as a former Angband player/developer and someone who has played Minecraft, I can say that a well-crafted old-style generative system can give you lots of novel and fulfilling adventures; bringing machine learning in should make it better yet, if intelligently done).

I'm not afraid of these technologies, nor protective of creatives, nor keen to dismiss them. I think they're good tools with a lot of potential. We should just approach learning their limits with a mix of interest and sobriety, like someone encouraging a relative with a new hobby they might or might not go pro with.
-
Meta Llama-3 is still not opensource; it has a license that does not qualify, being defective in its terms:

- 1.b.4 - Requires people to sign on to an acceptable use policy, restricting its use
- 1.b.5 - Bars use of Llama-3 to train other models
- 2 - Can limit large-scale commercial use

Llama-3 is source-available with a restrictive license. Whenever you see someone calling it opensource, consider correcting them.
-
It's important to be a bit of a pest and continually point out that Google and Facebook are not actually releasing their LLMs under open licenses; whenever they claim that honor, we should jump in and be that person who reminds everyone that it's not actually the case.

Whenever a company claims to be doing this, look at the license. If they say they're open and they're not, they deserve bad press. Whenever a company backs off from an open license (as some database companies have done), or is even a bad member of the opensource community (like RedHat), they deserve bad press, and we should give it to them.

Facebook's Llama models are not open. You can get the source, but you're not allowed to use model outputs to train other models, and there's a usage limit (a high one, but if the term is there, it's not open). Google's recent small LLM releases are not open either: they want the ability to rewrite license terms, they want you to commit to only using the most recent version of their model, and they have a bunch of "ethical" limits on what you can do with the thing. There will be more companies with source-available-but-not-open licenses.

The LLMs really worth celebrating and getting excited about will be BSD-licensed, or MIT-licensed, or possibly GPL-licensed. Not this stuff. Always read the license, be picky, and raise a fuss over false claims of openness.
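The "always read the license" habit can even be partly mechanised. Below is a minimal sketch of a naive keyword scan for red-flag clauses; the phrase list and the sample snippet are my own illustrative assumptions (the snippet paraphrases Llama-style terms, it is not verbatim license text), and a hit only means "go read that clause yourself":

```python
# Naive heuristic: scan a license text for phrases that commonly signal
# restrictions incompatible with open licensing. The phrase list is
# illustrative, not exhaustive -- nothing replaces reading the license.
RED_FLAGS = [
    "acceptable use policy",      # conditions use on a separate policy
    "monthly active users",       # commercial-scale caps (user thresholds)
    "to train",                   # bans on training other models on outputs
    "most recent version",        # forced upgrades to future terms
]

def looks_restrictive(license_text: str) -> list[str]:
    """Return the red-flag phrases found in the license text."""
    text = license_text.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

# Hypothetical, paraphrased snippet for demonstration only:
snippet = ("If monthly active users of your products exceed a threshold, "
           "you must request a separate license from the licensor.")
print(looks_restrictive(snippet))  # the usage-cap phrase is flagged
```

A permissive license such as MIT or BSD triggers none of these phrases, which is roughly the point: the red flags live in the extra terms these "open" releases bolt on.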
-
One thing LinkedIn and Google and most other companies could do to be better to users is to take more of a risk on adoption of new features and give users an opt-out for distinct kinds of content.

YouTube Shorts and LinkedIn expert answers are two features that were introduced semi-recently. The former provides an alternate interface for very short videos (under a minute?) and highlights them when browsing one's feed on YouTube. The latter provides advice from fame-seeking, often-clueless people on various topics. When the companies introduced these features, they obviously wanted them to be successful, and so they jammed the interface for them into people's feeds. This isn't intrinsically bad - companies should keep trying new things - the problem is that they didn't give users a way to opt out. A nice "never suggest these" button or dropdown would've been great, even if it depressed user metrics. It'd be respectful of the principle of giving users control over what they see, rather than just tossing everything to some algorithm.

A small percentage of us who are comfortable with such things might use userCSS or some extension to hide these things if we don't want to see them, but I'd feel weird recommending that to most other people, and I have to keep in mind that it will likely break someday and need some tweaking (like ad-blockers, which I also use extensively). It'd be better if companies just gave us the option to not see unwanted categories of content (unless they're financially dependent on those categories, like ads, where I can understand why they won't do that).