It's not that difficult concepts are easier in visual form. It's that concepts are verbosely described in visual form. Verbosity puts a ceiling on abstraction and makes things explicit, which is why things seem simple for people to whom everything is new (experts, on the other hand, find it harder to see the wood for the collection of tall leafy deciduous plants).
When you need abstraction, you need to compress your representation. You need to replace something big with something small. You replace a big diagram with a reference. It has a name.
Gradually you reinvent symbolic representation in your visual domain.
Visual programming excels in domains that don't need that level of abstraction, that can live with the complexity ceiling. The biggest win is when the output is visual, when you can get WYSIWYG synergies to shorten the conceptual distance between the visual program and the execution.
Visual programming is at its worst when you need to programmatically manipulate the construction of programs. What's the use of building all your database tables in a GUI when you need to construct tables programmatically? You can't script your GUI to do that; you need to learn a new paradigm, whether it's SQL DDL or some more structured construction. So now you've doubled what you need to know, and things aren't so simple.
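To make the point concrete, here is a minimal sketch of "constructing tables programmatically" — something a point-and-click table designer can't easily be scripted to do. The table and column names are hypothetical:

```python
import sqlite3

# Hypothetical scenario: one sales table per region. Trivial in a
# loop over DDL statements, tedious in a GUI table designer.
regions = ["north", "south", "east", "west"]

conn = sqlite3.connect(":memory:")
for region in regions:
    conn.execute(
        f"CREATE TABLE sales_{region} ("
        "  id INTEGER PRIMARY KEY,"
        "  amount REAL NOT NULL"
        ")"
    )

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)  # ['sales_east', 'sales_north', 'sales_south', 'sales_west']
```

The loop is the whole argument: the moment table structure depends on data, you need a textual, composable representation.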
We had a bake-off once: once you have the instrument control DLLs and link them into Matlab, you can code up a new experiment in Matlab in about 10% of the time it takes to do the same thing in LabVIEW. And from there, you can do pretty much anything that needs doing in terms of data work-up. I'm sure these days you could do the same in Python, which has become the new science and engineering language.
1) takes up a lot of space
2) each subroutine (sub-VI) has a window (in the case of LabVIEW, two windows), so you rapidly get windows spewed all over your screen. Maybe they've improved that?
3) debugging is a pain. LabVIEW's trace is lovely if you have a simple mathematical function or something, but the animation is slow and it's not easy to check why the value at iteration 1582 is incorrect. Nor can you print anything out, so you end up putting a debugging array output on the front panel and scrolling through it.
4) debugging more than about three levels deep is painful: it's slow and you're constantly moving between windows as you step through, and there's no good way to figure out why the 20th value in the leaf node's array is wrong on the 15th iteration, and you still can't print anything, but you can't use an output array, either, because it's a sub-VI and it's going to take forever to step through 15 calls through the hierarchy.
5) It gets challenging to think of good icons for each subroutine
6) If you have any desire for aesthetics, you'll be spending lots of time moving wires around.
7) Heavy math is a pain visually (so you use the math node that has its own mini-language, and then you're back to text)
To my mind, this is analogous to textual writing itself vs drawing, where text is an excellent way to represent dense, concise information.
He demonstrates some good examples.
I’m not quite sure it’s what you mean by visual, but it seems obviously valuable and tools that enabled some of what he shows would be really useful.
I’m not sure why we don’t have these things - I think Alan Kay would say it’s because we stopped doing real research to improve what our tools can be. We’re just relying on old paradigms (I just watched this earlier today: https://www.youtube.com/watch?v=NdSD07U5uBs)
There are some really amazing tutorials and examples here: https://youtu.be/wubew8E4rZg
[2] https://derivative.ca/community-post/made-love-touchdesigner...
[3] https://derivative.ca/community-post/making-go-robots-intera...
Where I work (structural engineering firm) some of the engineers do use it for general purpose programming. I see the appeal of it, but keeping the layout tidy and all the clicking is just too much effort. However for generating geometry it's quite a useful tool.
You say visual programming seems to unlock a ton of value. What can you do with a visual language that is much easier than text? Difficult concepts might be easier to understand once there is visual representation, but that does not imply creating the visual representation is easier. And why should pictures be more approachable than text? People might understand pictures before they can read, but we still teach everyone to read.
LabVIEW does have a visual-diff tool, but when I was using LabVIEW regularly on a complex project, no one used the diff system. They just checked out the entire file and compared it visually to another version.
Another thing: you can’t ctrl-f for control-flow structures. You end up mousing around for everything.
Another problem, all major graphical languages I’ve used are proprietary (labview, simulink, Losant workflow system).
a) it's amenable to version-control;
b) you can type/edit a program with the keyboard without the need for drag-drop;
c) the auto-layout approach avoids readability issues with too much "artistic freedom" for freestyle diagramming languages like LabView;
d) because it's mostly a pure algebraic language w/ type inference, there is little complicated syntax to juggle;
e) the IDE is actually faster, even though Bonsai is compiled, because there is no text parsing involved; you are editing the abstract syntax tree directly
It definitely has its fair share of scalability problems, but most of them seem to be issues with the IDE, rather than the approach to the language itself. I've never probed the programming community about it, so would be curious to hear any feedback about how bad it is.
But... this can be improved with good tooling and clear abstraction layers. Some of the problems with visual systems are caused by poor navigation and representation systems. Making/breaking links and handling large systems in small windows makes for a painful level of overhead.
This also applies to text - jumping between files isn't fun - but visual representations are less dense than text, so you have to do more level and file hopping.
If someone invented a more tactile way of making/breaking links and you could develop systems on very large screens (or their VR equivalent) so you could see far more of the system at once, hybrid systems - with a visual representation of the architecture, and small-view text editing of details - might well be more productive than anything we have today.
(1)--[2]-->(3)
(2)--[3]-->(4)
There is no good way to draw this visually without losing the ability to draw all possible programs. The grammars of both natural and programming languages use various strategies to get around this, and they are actually very efficient at it (from an information-theoretic point of view). That's not to say one can't improve on the current culture of writing programs in what is basically ASCII text, but visual programming is just not powerful enough to describe arbitrary computations.
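The asymmetry is easy to demonstrate: a labeled edge list in text encodes any directed graph with no layout constraints at all, while a drawing has to solve planarity and routing problems that get worse as the graph grows. A small sketch:

```python
# The labeled edges above translate directly into data, with no
# planarity or layout constraints on what can be expressed:
edges = [
    (1, 2, 3),  # node 1 --[label 2]--> node 3
    (2, 3, 4),  # node 2 --[label 3]--> node 4
]

# Any directed graph fits the same representation -- even the complete
# graph on five nodes, which famously has no planar drawing at all:
k5 = [(a, b) for a in range(5) for b in range(5) if a != b]
print(len(k5))  # 20 directed edges
```

The textual form scales linearly with the number of edges; the drawing does not.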
It's actively designed to prevent several typical problems of visual programming:
First, the overlapping-wire mess getting out of hand. It doesn't let you overlap wires; if your program gets that complicated, it forces you to break it down into a smaller unit. It takes some adjustment, but you get benefits from it.
Second, the "cant use it for general purpose stuff" problem is solved by the fact DRAKON can output to multiple different programming languages, meaning you can use it to build libraries and other components where you want without forcing you to use it for everything.
Let's say you whip up something cool using visual programming, and then you have a business requirement that requires something you can't easily squeeze into your CRUD app: maybe a database join, a query that doesn't cleanly fit into your access patterns, or you just wanna make a certain thing faster.
Then you design a scripting console, and now you have something that lets you build custom solutions. Well, at that point you're basically implementing non-visual programming. And at a certain point you reach the limits of what you can script in a hacky way, or you become more comfortable with the console than the UI, and you just chuck the visual programming altogether.
As I'm writing this, I'm thinking that I actually do visual programming, except I'm doing it IN MY MIND. Who needs a body-brain interface to get it into your brain when it's already in my brain?! Well, it'd be nice to get some stuff out of my brain, because cognitive load. And as much as I'd like to develop tech that makes it easier to get stuff out of my head, I'm mired down in trying to get the latest feature from the product team to work at all :)
My company used Azure ML Studio, which is a great program for making quick ML predictions. But making any kind of reasonably complicated data processing pipeline takes a lot of effort. I switched to writing code to process and run my predictions and my life became much simpler.
Language is extremely expressive and you can pack a lot of meaning into a small space.
And when you're required to go into code (the "source" representation) or deep "configuration" of the visual elements, which takes 10x longer than writing code in the first place, suddenly the last mile takes months to get right.
Visual scripting is growing, but it's better for some things than others.
That aside, I think most of us actually code in a visual programming style, but all the "visuals" are constructed in our heads on the fly as we read the code text. So how good you are at coding may be a function of how well you can represent these structures and how long you can maintain them in your head. Maybe an external tool that produces the representation for us doesn't mesh well with the internal representation of programmers experienced in text-based programming.
> Luna is a data processing and visualization environment built on a principle that people need an immediate connection to what they are building. It provides an ever-growing library of highly tailored, domain specific components and an extensible framework for building new ones.
Another issue: it's hard to pretty print visual programs or to put them in any sort of canonical form. This makes it harder to read them--there's no way to enforce a consistent style. It also makes various processing tasks (e.g. diffing, merging) much harder.
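Text, by contrast, canonicalizes cheaply: round-tripping source through its abstract syntax tree erases stylistic variation, which is exactly what auto-formatters and diff tools exploit. A small illustration using Python's standard library (`ast.unparse` needs Python 3.9+):

```python
import ast

# Two stylistic variants of the same function:
variant_a = "def f(x):  return x+1"
variant_b = "def f( x ):\n    return (x + 1)"

# Round-tripping through the AST normalizes both to one canonical
# form: spacing and redundant parentheses disappear.
canon_a = ast.unparse(ast.parse(variant_a))
canon_b = ast.unparse(ast.parse(variant_b))
print(canon_a == canon_b)  # True
```

A visual program has no comparable "unparse": node positions and wire routing carry no semantics, yet they dominate what a diff of the saved file shows.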
I think this is one of the draws of spreadsheets for simple "programs" and non-programmers. And of course, spreadsheets are ubiquitous.
On phones, the difference shrinks since typing relies on just two thumbs with no fixed position and no physical keys—while onscreen manipulation gets more immediate compared to a mouse. However, phones suffer from a short supply of screen area in which to pick and place the blocks. In my experience, it would still make sense to choose blocks by typing (filtering in real time from the entire list), and to minimize the work of placement—as in Scratch or Tasker.
Visual programming might have distinct value if it allowed you to manipulate higher-level concepts than text-based coding does. However, it turns out that the current building blocks of code work pretty well in providing both conciseness and flexibility, so visual blocks tend to just clone them. Again, the situation is better if you can ride on some APIs specific to your problem domain—like movement commands in Scratch or phone automation in Tasker and ‘Automate’. Similarly, laying out higher-level concepts, as in UML or database diagrams, has its own benefit by making structure and connections more prominent.
Using LabVIEW over C did have some benefits; streamlined concurrency seemed like a major advantage.
Could you start with a WYSIWYG HTML editor and build a really good webpage, to see the benefits and drawbacks?
Aside from that, there is the issue of tooling (source control, etc), editing large blocks, etc. which the visual languages I've used are not great at.
But it should be recognized that some things are better visually and some things are better textually. Typing "a = b + c" is way simpler than dragging a bunch of blocks around to describe the same thing. But visual tools are superior for understanding relationships - a connects to b, which connects to c makes a lot more sense when you see it as "[a] -> [b] -> [c]", and an ascii diagram like that quickly becomes unwieldy while graphical boxes still work.
I find it an interesting comparison: drawing diagrams with a diagramming tool (e.g., Lucidchart) vs with a textual description language (PlantUML). I find the textual language far easier to use to quickly produce diagrams, but Lucidchart is superior for tweaking the exact dimensions and alignments of things.
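The text-to-diagram workflow is worth spelling out: you describe the graph as data, emit a textual diagram language, and let a layout engine do the drawing. A minimal sketch that emits Graphviz DOT (the module names are hypothetical; rendering assumes Graphviz is installed):

```python
# Describe the diagram as plain data...
edges = [("parser", "typechecker"), ("typechecker", "codegen")]

# ...and emit DOT text; `dot -Tpng` would handle layout and rendering.
lines = ["digraph G {"]
for src, dst in edges:
    lines.append(f'    "{src}" -> "{dst}";')
lines.append("}")
dot = "\n".join(lines)
print(dot)
```

Because the diagram is just text, it diffs, merges, and regenerates cleanly — the tweaking-exact-pixels step is the only part the textual route gives up.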
All of which is to say, both approaches have cases where they work better, and others not so much.
Note I'm not talking about class diagrams, I want to see a flow chart of the actual imperative code (for loops, if/then, etc...) of an existing popular text-based programming language.
When you add some constraints in, like for example, the limitations of a spreadsheet, visual programming can work exceedingly well. It works great for these domain specific usages. But honestly, a text document of textual statements is a pretty good way to represent a general purpose programming language made up of procedural statements. You could make a UI like Scratch for other programming languages, but:
- the interface would be cluttered and likely not nearly as efficient as just typing
- other than virtually eliminating syntax errors, it's unclear what you are accomplishing - it's not easier to break down problems or think procedurally.
- You could probably get similar benefits with a hybrid approach, like an editor that is aware of the language AST and works in tokens.
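That hybrid approach is practical today: Python's standard `tokenize` module already lets a tool operate on tokens rather than raw characters. A small sketch of a token-aware rename (identifier names here are made up for illustration):

```python
import io
import tokenize

# Token-aware rename: replace the identifier `count` while leaving a
# string literal containing the word "count" untouched -- something a
# plain textual search-and-replace cannot guarantee.
src = 'total = count + 1\nprint("count is", total)\n'

tokens = []
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type == tokenize.NAME and tok.string == "count":
        tokens.append((tokenize.NAME, "n_items"))
    else:
        tokens.append((tok.type, tok.string))

new_src = tokenize.untokenize(tokens)
print(new_src)
```

An editor built on this sits between plain text and a Scratch-style UI: it keeps typing speed while gaining structural awareness.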
So my view is that visual programming is perfectly mainstream and just has not been demonstrated to have substantial benefits for typical general purpose programming languages.
It does not end well. The results are not pretty. Stick to text representation of any control flows.
I've been trying to implement something with Power Automate, and presumably that's "mainstream", but it strikes me as falling into the classic pattern of appealing to buyers rather than users. I feel 10-100 times less productive than with, say, VBA, for no advantage.
One thing that is particularly frustrating to me is that it's so slow and buggy I am afraid of losing my work at any moment. You can't save your work unless it passes preliminary validation, but sometimes reopening it makes it invalid by removing quotes or whatever. Copying something out and pasting it back often fails to validate too, as the transformations are not inverses like they should be. Sometimes it just gets corrupted entirely. I'm not aware of any way to manage versions, or undo beyond a few trivial actions.
But the more fundamental reason I hate this is because it seems not to be designed to let you take a chunk of logic and use it in a modular way. At least this style of "visual programming" seems to apply the disadvantages of physically building things out of blocks, where it's entirely unnecessary. You've got some chain of actions A->B->C, but the stuff inside those actions is on a different level; you can't take that chunk of stuff and use it as a tool to do more sophisticated things. As far as I can tell. I keep thinking "it can't be as simplistic as it seems" and thinking I'm about to find a way to create general functions.
Compromises were made, quick fixes on quick fixes made text interfaces just usable enough, sunk costs grew, and habits formed. The visual programming I see in game engines now carries those habits with it, because to build a language of nodes you first have to learn the ways of ASCII code.
And from what I understand, hardware is optimised for whatever software is popular enough to sell, so even if the software changed, the hardware would take longer. It takes an awesome goal to justify starting over on a truly visual interaction path when there is a system that almost, kinda works. And what-ifs are not in the budget.
Lots of examples on Progopedia: http://progopedia.com/version/sanscript-2.2/
It turned out people weren't really interested. However, people were interested in the diagramming library created to make the language, so by virtue of already having thought really hard about what goes into good diagramming tools, my company started selling that. First as C++, then Java, then .NET, now as a JavaScript and TypeScript library called GoJS: https://gojs.net (Go = Graphical Object).
It's one of those ideas I have no time to implement sadly, at least for now.
That said, working with data is an area that lends itself well to visual programming. Data pipelines don't have branching control flow, so you'll see some really successful companies in this space.
Alteryx has an $8B market cap. Excel is visual programming as well.
I think another issue is that it's costly to create a visual language, discouraging experimentation with new languages. With a text based language, all of the editing tools are already there -- a text editor -- on any decent computer. You can focus on refining your language, and getting it out there for others to try.
http://vis.cs.brown.edu/docs/pdf/Upson-1989-AVS.pdf
The idea spawned many imitators (VTK, IBM DX, SGI Iris Explorer). The product was spun out of Stellar shortly afterwards, and the company is still in existence:
Another way to look at it is the 7 +/- 2 rule of short-term memory attention -- when you look at something and try to "grok" it (a gestalt experience) you really need a limited amount of information in your visual field. To do this you need to move to higher and higher levels of abstraction, which is a linguistic operation - assigning meaning to symbols. Even in visual programming, you end up with a box that has a text field that holds the name of the exact operation - so you may as well cut out the middleman and stick with the linguistic abstractions.
Now, if a program is "embarrassingly visual" -- dataflow operations in signal processing, etc., the visual DSLs do seem appropriate.
Part of learning to program is learning to work with abstractions, especially if you have never been exposed to something similar (mathematics, physics, engineering, etc.). Things go out of sight, but you still need to train your brain to manage them.
This is a bit like playing chess. A good, experienced player will be able to plan long in advance because his brain has been trained to spot and ignore irrelevant moves efficiently. If you imagine training that would let the player learn to recognize good moves but not learn how to efficiently search the solution space, you would be training a brute-force computer that would not be a very good player.
I think visual programming is a different thing from regular programming. I think compromises like Logo https://en.wikipedia.org/wiki/Logo_(programming_language) are much better teaching tools. You still program a language with syntax but the syntax is very simplified and the results (but not the program) are given in a graphical form that lets you relatively easily understand and reason about your program.
It's useful for allowing more "citizen devs" (regular folk with little exposure) to come up with prototype, high-level, proof-of-concept apps, including UX design. It is a big deal in the corporate arenas I've had exposure to, but I think widespread adoption is still years away.
You will always need non-visual languages to do things in a featureful and scalable way.
Visual programming works very well for data flow problems.
I used it to easily connect to a piece of lab equipment, reset it, set whatever settings I want, run a test, and then log the output to a file. I could setup a test then walk away and return to data. Doing the tests manually would take many months.
Both have labels as remarks/comments and you can easily put in a switch statement to test new code or use highlighting to see exactly where the program is running albeit slowly.
One of the fun things to do was circle a repetitive task then turn it into a function. A large program requires a large screen to see it all. Widescreens are terrible for it.
After basic settings and availability in libraries, it is better to move to a text language. Visual programming is a quick and dirty solution.
I suspect visual programming is more common than we realize though. I had an acquaintance at Workday who claimed a lot of work was done there in a visual programming language.
Also, arguably website/app builders are a visual programming "language" and they are extremely common.
Also, it is too verbose. The 'functions' take away a lot of screen real estate (this is most obvious for mathematical stuff). If you start on just a bit more than something you could have achieved in 50 LoC, it tends to get really messy.
VP lacks referencing as well, at least in most of the environments I know. Declared a variable? Too bad, you have to connect this node to everywhere you need it. Sigh.
Reuse of components is possible, and some environments implement it, but mostly just per file, not in general. E.g., you can make a custom component inside one project, and if you edit it, all instances get updated, but you can't save it as a standalone component you can XREF in other files/projects, which makes it hard to build a custom library of functions.
But it depends on the industry. As others have already mentioned, the more an industry is led by visuals in the first place, the more common it is to actually utilize visual programming (or rather scripting); it's also quite useful in real-time contexts, which is where its strengths lie. The CG industry is one of those, but so are architecture and design in general (think of McNeel's Rhinoceros with Grasshopper, THE most used visual scripting environment as of today, especially in a professional setting).
Conclusion: VP has its merits and is used extensively, just maybe not the places you expected/hoped for.
His programs ended up with state distributed all over the place and an impossible to keep track of control flow.
Around the same time, I was intrigued by articles promising easy visual construction of programs; it seemed to be in vogue then. It took me several years to realize that the nice examples in journal articles were just that: nice examples. Visual programming is appealing, like flowcharts were to my friend, but it suffers from a lack of good support for building abstractions and an inefficient method of building visual programs and keeping track of changes.
The thing is that it's extra effort for the author to draw pictures and think of image layout for what is ultimately the manipulation of symbols. If you're a proficient touch typist with a powerful editor, including macros, jump to definition, symbol search, multi cursor editing, you can spend a few minutes at a time without your hands ever moving far from the home row, let alone touching the mouse. We have very powerful tools for text manipulation and the highest bandwidth input device we have is a keyboard, so it's not at all surprising to me that text is still king for programming and probably will be until we have some kind of direct brain -> machine interface.
I tried a Scratch-like for Android and did the first couple of days of Advent of Code a couple of years ago. It was tiring (too many instructions to drag), mildly infuriating (when something didn't fall where it should), and hard to refactor (when experimenting).
That's why that year I ended up transitioning to lisp, writing in a text editor and copying to a web-based lisp interpreter.
The local maximum I found was this last year with J's Android client. With their terseness, array languages can be used quite effectively within mobile constraints.
/(.*?)somefunc\((.*?)\) {\n( *?)return a + b;/$1somefunc($2) {\n$3return a * b;/
And then searched and replaced across the 11 files. I have no idea how I'd do that with a visual programming environment. I actually needed to do that about 7 times with different regular expressions to do all the refactoring. Also did this yesterday: I `ls` some folder. It gives me a list of 15 .jpg files. I copy and paste those 15 filenames into my editor, then put a cursor on each line and transform each line into code and HTML (did both). Again, not sure how I'd do that in a visual programming environment.
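The same refactor is also a few lines of script. A sketch with hypothetical file contents (the real edit spanned 11 files and a different function body): change `return a + b` to `return a * b` inside `somefunc`, using a pattern in the spirit of the one above:

```python
import re

# Hypothetical file contents standing in for the 11 real files:
src = """\
function somefunc(a, b) {
    return a + b;
}
"""

# Capture the function header and indentation, rewrite only the body line.
pattern = re.compile(r"(somefunc\((.*?)\) \{\n( *))return a \+ b;")
new_src = pattern.sub(r"\1return a * b;", src)
print(new_src)
```

Applying it across files is just a loop over `pathlib.Path.glob` — the point being that text tooling composes; there is no equivalent "regex over nodes and wires" in the visual environments discussed here.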
While visualisation is (sometimes) useful for grokking difficult concepts, writing a program is a completely different kettle of fish. I could draw some pictures that might convince you that you understand a Fourier transform, but you'd be no closer to being able to efficiently implement a Fourier transform in a computer.
Let me suggest two ways visual programming might be a big part of the future:
1. New paradigms, such as constraint-based programming, might well lend themselves better to a visual presentation; and
2. VR. Visual programming is indeed much less visually dense than text, but if you start over with the assumption that you're doing it in VR, that is suddenly, if anything, a virtue.
Imagine something that was part Excel, part Access, with visual, animated 3D representations of other major programming abstractions also, and you start to see that VP might really be the future.
All content creation is difficult on mobile devices, other than passive content creation like shooting videos and taking pictures.
Visual programming will be way easier on a desktop machine with a proper mouse, and a real keyboard (for the UI shortcuts you will end up using).
But, about the main question: visual programming has no value beyond being friendly to newbies.
This is like asking why don't Fisher-Price plastic tools for kids take carpentry by storm. They are so light and easy to hold, why don't we frame houses with them?
Creating new (and destroying existing) actors at runtime is the hard part in programming, because the complexity dimension explodes via that: +1 thingy in the system means many new possible runtime flows, in theory, even if it's not a de facto new flow; you have to prove whether it is and handle it accordingly.
You can show it in a visualisation, but to be able to do that it must be an animation; time matters.
I think a higher-level language like Idris will be able to generate these animations from the code, making it easier to absorb existing codebases.
They are almost universal at large companies and have a large ecosystem of visually programmed/configured partner software companies and components.
I can't imagine being able to write maintainable, well tested, scalable software (cough, software engineering, cough) with some version of drag and drop. I'd love a visual element added for helping navigate code. I like system diagrams, flowcharts, etc. But I'd like these to be generated by my code, not generate my code. I feel like this would be trying to write a book with only emoji and/or gifs.
Visual programming with nodes/blocks is another abstraction for ideas. But blocks and nodes are much, much less flexible. So these abstractions have to be much more precise... Which leads to problems.
A good analogy is Lego vs clay.
With Lego, you can make anything with the right bricks. The problem is each brick is precisely crafted and you're limited to the blocks you have.
With clay, you have the freedom to mold anything to whatever precision you need... but it might take you longer.
In principle, though, nothing prevents you from writing very complex programs using visual languages. It doesn't really make things simpler; once you reach that point, as said above, it's just more efficient to combine words than to draw shapes with a mouse.
For examples, for building backend infrastructures take a look at https://www.mockless.com/.
They provide an easy-to-use interface where you can set up the data model and even complex functional flows. In the background the tool creates the source code, and it can connect to your Git repository and commit whenever you make changes, exactly like a developer would.
It gives some value, and sacrifices other values. So far tooling has not reached the point where the sacrificed values are small enough to justify the added values.
> Programming becomes more approachable to first-timers.
How is that relevant? In no industry should first-timers get any responsibility. And textual code is easy enough to grok after a short while. The problems people have with textual code after that would still remain with visual code. So no value added.
I've been programming for about 37 years now and recently, not wanting to mess with Swift for that, built a "quick action" command (for Finder) that converts/shrinks HEIC images to .jpg suitable for e-mail. Took something like 2 minutes with nearly no experience using Automator. It's not a niche technology.
What sets it apart from previous versions of Scratch is that it can run in the browser, which makes it much more accessible to a wider audience.
It's my opinion that, with this browser-based interface and the growth of instructional videos, we will see visual programming become more mainstream.
Have you heard of Informatica PowerCenter? You create a mapping instead of writing a SQL query. The problem is you must manage inconsistent interfaces, resize windows, and write in small text boxes.
Of course it has its benefits, but in most cases it doesn't help much in removing complexity, and it adds its own.
Some difficult concepts can more easily be grokked in a visual form. I'm not sure that they all can. In fact, I suspect that it might be about even (as many easier in text as are easier in pictures). We just notice the ones that would be easier in pictures, because we're working in text.
* paredit/parinfer for Lisps are actually tree editors in disguise.
* DRAKON. Having put critical business logic in it, was really a boon for quick understanding after returning to the codebase later.
It can convert Visual to/from TypeScript/ JavaScript. And it works on mobile.
It's for games, but you'd think the technology would be applicable for any kind of program.
As such, I think Blender handled this very well. You can set up the material node graph both visually and through a scripting command line.
Using box and arrow diagrams for documentation can give you a lot of these benefits without needing to adopt a radically new paradigm.
Beyond that, for real engineering? I've worked with EEs who write massive applications in Labview -- their codebases are all impossible to maintain masses of pain and suffering.
As you said, Scratch is used in education for exactly this purpose. Visual programming fills the niche of high level scripting, when you have a system and want to script it in Excel style. On lower levels text is easier to deal with.
I bet there will be a breakthrough in interfaces (likely aided by AI) though that will make visual programming a lot more widespread
Beyond a small limit you'll start wanting abstraction - so goodbye to seeing everything. Connections will become a mess, and moving your hand between mouse and keyboard is annoying.
https://blog.metaobject.com/2020/04/maybe-visual-programming...
A quick glance at source code reveals a ton of information, thanks to indentation and code blocks.
Why/Why not?
I am sure the first paragraph of that story as a flow chart looks awesome, and then...
It's essentially returning to ideograms like ancient Chinese or Egyptian writing. Not a good idea.
I spent a few years programming mainly with Simulink, and more recently have experimented a little with some graph-based game engine UIs. (Unity shader graph etc..)
Now, as far as Simulink is concerned, I feel that it was (and possibly still is) an ergonomic disaster zone. There's just too much mousing around and adjusting routing between nodes. Also, merging is really difficult because of the save file format. This is a significant problem for many engineering organisations.
For any visual programming tool, the value of a visual graph diminishes as the graph becomes more complex. It is most valuable when it is kept simple and illustrates e.g. high level components only.
Now ... none of these problems is insurmountable, and having a relatively simple high-level graph to show the relationship between the major components of a system is an incredibly valuable communications tool -- but users do need to resist the temptation to use the graph for everything down to every if statement and for loop. These things are best used to explain the high level relationship between major system components and the overall flow of data through the system. Fine grained algorithms are better represented textually.
So, over the years I've developed a configuration-driven framework that's designed to (eventually) be driven from a visual UI.
Computationally, the framework implements a Kahn process network. I.e. it's a dataflow model, where the nodes represent sequential processes, and the edges have queue semantics. (The queue implementation can be replaced with implicitly synchronised shared memory in some situations, so it can be quite efficient).
This enables me to e.g. implement intelligent caching of data flows, support record-and-replay-style debugging, co-simulation, and automatic component-test generation.
The nodes can be arbitrary unix-philosophy binaries, Python functions, or native shared libraries (eventually I plan to support deploying nodes to FPGA as well), and because of the Kahn process network semantics, they behave in the same way irrespective of how they are distributed across computers on the network. This makes it an ideal rapid-prototyping tool for quickly integrating existing software components, e.g. throwing together machine learning components that have been written in different languages or using different frameworks. It's a bit like ROS in this regard.
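The core of those semantics can be sketched in a few lines. This is a toy illustration (hypothetical code, not the commenter's actual framework): each node is a sequential process running in its own thread, and each edge is a blocking FIFO queue, so the observable behaviour depends only on data availability, not on scheduling.

```python
import threading
import queue

# Toy sketch of Kahn-process-network semantics: each node is a
# sequential process (a thread) and each edge is a blocking FIFO
# queue. A node blocks until one item is available on every input.

SENTINEL = object()  # end-of-stream marker

def start_node(fn, inputs, outputs):
    """Repeatedly read one item from every input edge, apply fn,
    and write the result to every output edge."""
    def run():
        while True:
            args = [q.get() for q in inputs]
            if any(a is SENTINEL for a in args):
                for q in outputs:
                    q.put(SENTINEL)  # propagate shutdown downstream
                return
            for q in outputs:
                q.put(fn(*args))
    t = threading.Thread(target=run)
    t.start()
    return t

# Wire a two-node pipeline: double each value, then collect results.
edge_ab = queue.Queue()
edge_bc = queue.Queue()
results = []

doubler = start_node(lambda x: x * 2, [edge_ab], [edge_bc])

def collect():
    while True:
        item = edge_bc.get()
        if item is SENTINEL:
            return
        results.append(item)

collector = threading.Thread(target=collect)
collector.start()

for value in [1, 2, 3]:
    edge_ab.put(value)
edge_ab.put(SENTINEL)

doubler.join()
collector.join()
print(results)  # [2, 4, 6]
```

Because each node only communicates through its edges, the same wiring would work whether the "threads" are local, separate processes, or machines on a network, which is the property being described above.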
The dataflow graph itself is represented by a set of YAML files (or data structures that can be generated/modified at runtime), with different aspects (connectivity, node definitions, layout) separated to make textual diffs and merges easy to understand, and enabling teams to work more effectively together.
Also, because the graph is explicitly represented as a design-time data structure rather than being a runtime construct, it's easy to use GraphViz to generate diagrams, allowing you to have documentation that's correct by definition without spending ages adjusting untidy connections and layouts. (Although I do want to spend some time improving the layout algorithms and providing some mechanism for layout hinting).
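Since the graph is plain data at design time, emitting GraphViz DOT really is just a walk over the structure. A minimal sketch (the graph contents here are made up; the commenter's framework reads this structure from YAML instead):

```python
# Hypothetical design-time graph description, as a plain data structure.
graph = {
    "nodes": ["camera", "detector", "tracker"],
    "edges": [("camera", "detector"), ("detector", "tracker")],
}

def to_dot(g):
    """Walk the plain-data graph and emit GraphViz DOT text."""
    lines = ["digraph dataflow {", "  rankdir=LR;"]
    for name in g["nodes"]:
        lines.append(f'  "{name}" [shape=box];')
    for src, dst in g["edges"]:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(graph))
```

Piping the output through `dot -Tsvg` yields a diagram that is correct by construction, since it is derived from the same data the runtime uses.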
Right now, you can only generate the visual graph from the textual description. In future I want to make it possible to edit the graph visually, so you can, for example, drag and drop compute nodes onto different computers really easily; stop, rewind and replay history; and simulate the effects of e.g. network contention, experimenting with moving nodes around the available hardware to optimise performance.
I'm also experimenting with hybrid symbolic/statistical AI techniques that use the data flow graph to help with fault finding, automatic safety-case generation etc.. etc..
https://hackernoon.com/on-kloudtrader-and-visual-programming...
It wasn't the most novel idea or anything, but most existing systems were either clunky, expensive, or had UIs from the previous century. Ours was a hip SaaS inspired by Robinhood, Bubble.is, and all the new Bloomberg terminal clones. We built an initial prototype using Google Blockly, did a ton of UI/UX research (studied every visual programming system from Sutherland's Sketchpad, to the Soviet DRAKON, to modern-day Mindstorms, Scratch, and the various PLC control systems and LabVIEW derivatives) and slowly built out the rest of the algorithmic trading stack.

It was tremendously difficult, mainly because of lack of labor; our small startup only had 3 people. We were essentially translating entire trading frameworks, backtesting tooling, and blotters into virtual Lego bricks by hand. It was my first startup and I was inexperienced. We were fresh-faced, and between trying to raise funding, sales and marketing, and product development, progress was slow. Patio11's (Kalzumeus) blog posts on Stockfighter were highly inspirational; we saw what they managed with only a small team and tried to replicate it. But between Patrick and the Ptaceks, they had several decades more engineering and business experience than us, something we completely discounted.

The tooling around Google's visual programming system was like early Android development: it worked in theory but was tremendously difficult to use. Microsoft's MakeCode (which is also built on Blockly) had an order of magnitude more engineering manpower than us. Visual programming was not easy to build quickly — a production-quality system isn't something you can clone in a weekend. We looked towards automation: around the same time, a code synthesis YC company called Optic appeared and we strongly considered leveraging them to let us build out faster.
https://news.ycombinator.com/item?id=17560059
However, a couple months later, YC funded a similar company called Mudrex who had a prettier UI and a founding team with a stronger fintech track record.
https://news.ycombinator.com/item?id=19347443
At that point we crossed the Rubicon and pivoted to DevOps/PaaS, launching a Heroku-style product.
https://KloudTrader.com/Narwhal
Did the whole tour of Docker Swarm, Kubernetes, KVM, etc. Built out our own cloud almost from scratch. Signed a contract with a broker to offer commission-free trading and everything. But it was a difficult product to sell in a crowded enterprise market with only a few (but big) customers; we were playing catch-up with companies like Alpaca, and our product was being eaten from the corners by new features launched by companies like Quantopian and QuantConnect. Quantopian was where I cut my teeth on computational finance and automated trading; it was what inspired me to build a fintech startup in the first place, so in many respects our product being displaced was a validation of product-market fit if nothing else. In retrospect, we should have switched to the MLOps market instead, which is booming right now. Algorithmic trading, or at least the consumer-focused variety we were trying to sell, has a stack that is very similar to your usual ML stacks. Over those two years I learnt about enterprise sales and the various shenanigans involved in FINRA and SEC compliance, which was tremendously valuable in terms of growth.
These days, however, we are mainly doing productivity software for voiceovers and transcriptions with a bit of ML thrown in (voice cloning research). Fast growth, easy traction, great market. Not as lucrative as fintech, or at least a lower ceiling, but at our current rate of growth I am certainly not complaining. It is hosted mainly on Google Cloud, AWS, and other providers (after having to build our own Heroku, we have had enough of DevOps).
https://twitter.com/narrationbox
If you are looking for advice or suggestions on building your own visual programming systems, I am available for consulting services.
(On second thought I should probably turn this comment into a Medium post)
First, let's look at the size of the keyboard in relation to the screen: it's huge. In most laptops, it is about half the size of the screen. Keys are easy to use: they all work the same way, and we all grow up knowing how to use letters. You can press several at once or press them in quick succession, plus they are huge and haptic. You don't have to be precise with how you press one, and you feel it when it presses: your eyes, fingers and ears are all telling you that the information you're transmitting is making it to the screen.
You effectively have a very big, very good, dedicated toolbar and the rest of your screen is either your canvas (the text area) or can be used for other tools to augment your programming.
With visual programming languages, you have to reserve part of this screen real estate for the input. It's like having to put your keyboard on screen: it leaves you less space for the canvas or additional tools. Moreover, these UI elements are often smaller than the keys on a keyboard, and hovering/clicking something with a mouse doesn't provide haptic feedback. All this means a little more mental effort when composing with the mouse, doing all the aiming, clicking, dragging and dropping. These are more "precise" and "delicate" movements that require more "attention", if you will.
On top of that, consider that no visual programming environment is ever "fully" visual: there's always typing involved at some point or another. You have to enter a number, a string, the name of a variable, etc. All this switching between the keyboard and the mouse means even more movement, more attention, more cognitive load in the laying out of ideas. There is a reason all these modern IDEs have a "VIM mode": you'd think our new tools would replace an older, more complicated way of doing things, but VIM has managed to survive in the hearts of experienced programmers because it allows them to type without reaching for the mouse.
Let's delve into this. For an inexperienced programmer, "wording" the idea (remembering the syntax, etc.) probably takes enough time that laying it out is not the bottleneck. For an experienced programmer, finding the idea is the bottleneck; once found, wording it is quick, and so laying it down becomes the bottleneck. Being able to express things quickly becomes important.
Moreover, revisiting this "finding the idea" and "wording the idea": because wording the idea is fast, an experienced programmer might write while still looking for the right idea, as a way of "thinking out loud". They will type something, delete it, type something else, backspace repeatedly, etc. Seeing it in front of them helps the idea materialize, kind of how a musician might play notes while thinking of a melody, or how digital tools allow artists to paint strokes and undo as they draw [1: notice in this link how a digital artist works. They're constantly drawing and erasing. It's the first video I found when searching "painting digital"].
This is harder and slower to do with current visual tools.
There are very established graphical programming languages in the art-scene. They are easy to get in to, but become messy for complex projects.
For the art stuff, I think they're a bit like Jupyter notebooks, but even better: everything compiles in real time and you can see it working. On the other hand, it quickly becomes a spaghetti mess.
For everything else you need things with a limited scope: a single state to track (workflows, conversations) or really basic logic and strong abstractions (mangling two APIs together). Beyond that you need a programmer anyway, and their gain from a GUI is limited.
Generally there are a few things that are hard to represent visually, e.g. abstracting and recycling code (writing functions), parallel processes, state, and highly interconnected things.
A few examples for the curious:
For Visuals:
- quartz composer (old by now) http://www.mactricksandtips.com/wp-content/uploads/2008/03/n...
- touch designer (modern, very nice nesting, you can zoom in to groups): https://youtu.be/hbZjgHSCAPI?t=49
Music:
Pure Data and Max/MSP (not strictly just for music): https://youtu.be/rTQgfhsQ7xo
Bitwig Grid (very skeuomorphic, yet one of the modern "modular" ones. I dig the look of the droopy bezier curves though): https://youtu.be/dNdhbHGeHPw
More recent and really interesting to me is "no-code" environments that are now gaining traction.
Business logic:
BPMN + Camunda (you still need to code everything in text, but you can shuffle the flow around afterwards): https://youtu.be/HxtZf5VD6lQ?t=625
No-code API plugging:
Appmixer: https://uploads-ssl.webflow.com/5a9d00dba5e9fa00010cb403/5c8...
AI-assisted chatbots:
Cognigy: https://youtu.be/QSJ-nTwjn-c?t=1525
The tooling was pretty bad (it was a plugin for an older version of Eclipse and had pretty bad bugs). We added our functionality to that plugin; these were big usability wins, but it also made the plugin even less stable.
Sometimes the graphical editor would "desynchronize", and if you hadn't noticed you could be editing the process in the graphical editor for 2 hours while the XML file remained unchanged (or even broken).
So developers became paranoid about changing to the XML view every few seconds to check, or they just edited the XML directly all the time :)
We had problems with several things that are solved in regular languages. For example, there were subprocesses (the analog of functions): you could invoke a subprocess, pass arguments, wait for it to end, and receive return values from it. That solved the problem of duplicating common functionality, but also required us to add support for pretty complex mappings when invoking subprocesses.
For example, we had a node that ran an HQL query and showed the results in a table on the user terminal. These queries were parametrized, and some parameters were constant within each process but changed between processes. So we added a way to specify those parameters. Then there were some processes where it would be useful to specify additional conditions for a query in the graphical editor. So we added that.
Then it would be useful to pass those conditions around between subprocesses, filling some parameters in some subprocesses while the others remained to be filled by the app server.
It was much too complex (I am mostly to blame - I was straight out of university and was amazed by this cool technology and wanted to add all the bells and whistles I could think of :) ).
What we should have done instead is just duplicate the subprocesses with different conditions, or construct those conditions in Java code on the app server and just call that from the graphical designer.
Eventually, about half of all our business logic was encoded in the nodes that invoked subprocesses, in the mapping between inner and outer process variables.
The other half was mostly in the conditions on transitions between the nodes.
Neither of these things was visible when you looked at the whole graph: transitions only showed labels (which were sometimes outdated vs. the REAL condition code), and the invoke-process nodes showed nothing until you clicked on them.
So when you had a big process with 40 nodes you had to click on each node (and scroll through sometimes 30 mappings) to see "who set that variable".
Same with transitions - you had to click on each transition to see the conditions.
We tried to show the conditions in the main view but it wasn't easily changed in that plugin.
Overall I think the graph language was great, and the automatic transaction and persistence support was the best thing about it, but the visual programming aspect of it was rarely useful, and very often problematic.
I would now go the other way - have the graph language be text-first (with some nice DSL), and rendered to a nice graphical view on-demand.
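A text-first representation could be as simple as an edge-per-line mini-DSL that renders to GraphViz on demand. This sketch (entirely hypothetical syntax, nothing to do with jBPM's actual XML) also puts the transition conditions directly on the rendered graph, so the real condition code stays visible without clicking through every transition:

```python
import re

# Toy text-first graph DSL (invented syntax): one edge per line,
# with an optional transition condition in square brackets.
EDGE = re.compile(r"(\w+)\s*->\s*(\w+)(?:\s*\[(.+?)\])?")

def parse(dsl):
    """Parse 'src -> dst [condition]' lines into an edge list."""
    edges = []
    for line in dsl.strip().splitlines():
        m = EDGE.match(line.strip())
        if m:
            src, dst, cond = m.groups()
            edges.append({"from": src, "to": dst, "condition": cond})
    return edges

def to_dot(edges):
    """Render the edge list to GraphViz DOT, with conditions shown
    as edge labels so they are visible on the whole-graph view."""
    out = ["digraph process {"]
    for e in edges:
        label = f' [label="{e["condition"]}"]' if e["condition"] else ""
        out.append(f'  {e["from"]} -> {e["to"]}{label};')
    out.append("}")
    return "\n".join(out)

dsl = """
start -> validate
validate -> approve [amount <= 100]
validate -review [amount > 100]
""".replace("-review", "-> review")  # keep the example tidy

print(to_dot(parse(dsl)))
```

Because the source of truth is text, diffs and merges work with ordinary tools, and the picture can never desynchronize from the logic: it's regenerated from it.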
Better tooling could probably help, but we spent maybe 3 man-years on fixing and improving that plugin and it never worked great.
[1] https://docs.jboss.org/tools/whatsnew/jbpm/images/multiple-p...