An abstraction is a simplification of something more complex. We use abstractions to make our computer usage easier (way easier). We might use libjpeg and libpng to load images, or OpenGL and Vulkan to render 3D scenes. Abstraction abuse is when someone uses a high-level abstraction to perform a single low-level task, or makes something so abstract that it simply crashes your computer.
Lower-level abstractions can be extremely helpful, and are, in fact, healthy, since they can carry out complex tasks efficiently. "Abstraction abuse" means programs are created at such a high level that they become large and inefficient. Other titles I considered for this post include "Arbitrary Abstraction", "Toxic Abstraction", and "When Abstraction Becomes a Design Fallacy".
Over-abstracting, or making a tool that can only be used at a very high level, can create an awkward and narrow workflow and result in inefficient "low-level mimicking". Here's a non-computer-related example: a square is a rectangle with equal sides. A rectangle is made up of four connected lines. A line is made of two points on a 2D plane, and so on.
If your high-level documentation doesn't tell users what a line is (or worse, your program doesn't let them make a line at all), and they need a line, they'll likely just make a long, narrow rectangle, or stack a bunch of small squares next to one another! If you think about how a computer has to render this "line", you can bet it will take longer to draw.
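To make that "low-level mimicking" concrete, here's a toy sketch in C (all of these names are my own invention, not from any real graphics library): both routines light up the exact same pixels of a horizontal line, but the square-stacking version pays the per-shape overhead once per pixel.

```c
#include <string.h>

#define W 64
#define H 64

static unsigned char fb[H][W]; /* toy 1-byte-per-pixel framebuffer */
static int rects_drawn;        /* counts per-shape setup work */

/* The low-level primitive: one tight loop over the span. */
static void draw_hline(int x0, int x1, int y) {
    for (int x = x0; x <= x1; x++)
        fb[y][x] = 1;
}

/* The high-level "rectangle" shape the docs do expose. */
static void fill_rect(int x, int y, int w, int h) {
    rects_drawn++; /* stands in for per-shape setup cost */
    for (int j = 0; j < h; j++)
        for (int i = 0; i < w; i++)
            fb[y + j][x + i] = 1;
}

/* Mimicking a line by stacking 1x1 "squares". */
static void fake_line(int x0, int x1, int y) {
    for (int x = x0; x <= x1; x++)
        fill_rect(x, y, 1, 1); /* one whole shape per pixel! */
}
```

Same picture on screen, but `fake_line` issues one full shape per pixel where `draw_hline` issues none.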
As you may have guessed, Electron is not for efficient programmers. It's for programmers who want to ship a product across multiple platforms fast. I won't say using Electron is a bad business tactic, but I can say that it will, almost certainly, give users a sub-par experience.
Can Electron programs even write to stdout? I've never tried it. I know, I should try a tool before judging it.
The Implementation Issue
One side of abstraction abuse is the refusal to use anyone else's abstractions at all, opting to write your own instead, because you can only truly trust yourself! I can understand this from a hobbyist's point of view, since it's always fun to implement your own version of the fundamental algorithms, but these "home-grown" implementations are often both bigger and slower than standard, widely-used libraries.
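For illustration (my own toy example, not something from a real project): a hand-rolled bubble sort next to the comparator you'd hand to libc's `qsort`. Both produce the same sorted array, but the home-grown one is quadratic, and it's one more thing you have to debug yourself.

```c
#include <stdlib.h>

/* Home-grown: O(n^2) bubble sort, because you can only trust yourself. */
static void my_sort(int *a, size_t n) {
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}

/* The standard abstraction: libc's qsort just needs a comparator. */
static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}
```

Calling `qsort(arr, n, sizeof arr[0], cmp_int)` gets you a tested, typically O(n log n) sort for the price of three lines.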
I kind of want to mention Greenspun's tenth rule of programming here:
Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Although this is mostly rhetorical Common Lisp propaganda, it captures the same idea of poorly reimplemented algorithms.
(And no, there are no rules one through nine. I don't know why, either.)
Poorly implemented or obsessive abstraction can lead to heavier programs. Dynamic linking has partially solved this problem, but it's still a huge pain when it comes to things like compiling and sandboxed package management. One main problem I run into with dynamically-linked programs is when the program depends on an obscure, non-standard library that isn't in any mainstream package manager (and if it is, it's a hundred years newer than the specified version and no longer backwards-compatible at all). Another problem is when a program uses a library that is just a set of abstractions over, say, GTK, despite GTK already being a fairly high-level library. The programmer probably doesn't use every function in this hypothetical GTK wrapper, which inflates disk usage (depending, of course, on how big the library really is).
A good example of all this is someone who wants to write a program that does nothing but display a green square in a window, so they pull in a video game engine, draw the square, and ship the entire engine with their program. The optimized version is a program that just uses libGL and GLUT.
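A minimal sketch of that optimized version, using classic immediate-mode OpenGL with GLUT (treat this as an outline, not a polished program; it opens a window and blocks in the event loop):

```c
#include <GL/glut.h>

/* Draw one green square. That's the whole program. */
static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0f, 1.0f, 0.0f);        /* green */
    glRectf(-0.5f, -0.5f, 0.5f, 0.5f);  /* centered square */
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 400);
    glutCreateWindow("green square");
    glutDisplayFunc(display);
    glutMainLoop(); /* never returns */
    return 0;
}
```

That's a few kilobytes of your own code linking against libraries the user almost certainly already has, versus shipping an entire engine runtime to do the same thing.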