What would the web be like if you could tell it what you want to do as easily as you currently tell it where you want to go?
Mozilla Labs is starting to experiment with linguistic interfaces. That is, we’re playing around with interfaces where you type commands and stuff happens — in much the same way that you can type a location into the address bar in order to go somewhere.
I think this is cool because, for one thing, I think language-based interfaces are seriously under-explored compared to pointing-based interfaces. For another thing, I used to work on a project called Enso. Enso’s a language-based interface, where you type commands in and stuff happens. I think we got certain things right and certain things wrong in Enso’s UI design, so I want to take another crack at doing it better.
What makes a good linguistic UI?
Here’s my current theory.
- It’s easy to learn.
- It’s efficient.
- It’s expressive.
Those are the three “E”s. Let’s unpack ’em a little.
“Easy to learn” should be self-explanatory. Even if a tool is super-efficient and incredibly powerful, if it’s too hard to learn, it’ll be relegated to the “experts only” ghetto. Yeah, that’s right, I’m talking about you, Unix shell. (ahem, Unix/Linux/Posix/BSD/etc.) The original language-based UI, the oldest UI still in common use, and the one which has given the whole concept of “type what you want to do” a bad name for the last thirty years, serves as an excellent counter-example. For a linguistic UI to be easy to learn, it should strive to avoid all of the following misfeatures of the shell:
- Not discoverable: There’s no guidance given to a first-time user. You type some letters and nothing happens: it feels like shouting into a void. If you don’t already have the basic commands memorized, there’s no way to figure out what they are.
- Cryptic names: Whether for historical reasons or for brevity, the standard names of commands, programs, and locations are all called stuff like ‘tar’ and ‘mkdir’ and ‘/usr/local/bin’. Because these names are unnatural and unfamiliar, they have to be learned by rote.
- No feedback: I just entered a command and all I got back was a blank line! It worked, but what did I just do?
- Options are hard to remember: Does the 'ln' command take the source file first or the destination file first? What does the '-z' option on 'tar' do again?
- Really easy to make mistakes: One wrong character and your innocent command is transformed into a ruthless file murderer. And there's no undo!
But the CLI isn’t all bad. Obviously, if it was all bad, there wouldn’t be so many people still using it! I’m a programmer. I live on the command line. The learning curve was years ago and now it’s second-nature. I couldn’t live without it. So what are the good points?
The first good point is that you can get a whole lot done with just a few keystrokes, thanks to the very short names, the tab-based autocompletion, and the command history that lets you easily repeat or modify earlier commands. This makes it a very efficient interface. You can learn more about the precise (quantitative) definition of information-theoretic efficiency in Aza’s article, Know When to Stop Designing — Quantitatively. There are logarithms involved. If you don’t feel like doing math, all you need to know for now is this extremely simple concept: the fewer keys you have to hit to get the computer to understand what you want, the less wasted effort, the more efficient the interface.
The second good point is that it’s not just a set of commands, it’s a language. (BASH is turing-complete.) Pipes, stdin/stdout redirection, backticks, environment variables, etc. form the grammar. Executables (“small programs that do one thing and do it well”) form the vocabulary. Every command line you write is a little one-time program. With shell scripting, you can make that one-time program into a reusable command. You’re not limited to a small set of commands: Like any programming language, or any human language, an infinite number of ideas can be expressed with a finite vocabulary. I call this quality “Expressiveness”.
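"Every command line is a little one-time program" can be made concrete with a throwaway pipeline built from single-purpose tools (the sample text is invented):

```shell
# Break a line of text into words, count duplicates, show the most common.
# Each stage is a tiny single-purpose program; the pipes are the grammar.
printf 'the cat sat on the mat the end\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -n 1
```

The last stage prints "3 the" (with uniq's leading padding): "the" appears three times. Nobody wrote a word-frequency program; the sentence *is* the program.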
There’s our three “E”s: Easy, efficient, and expressive. Unix has… well, two out of three ain’t bad! It beats the Mac/Windows style GUI hands down in both efficiency and expressiveness, but loses badly in ease of learning.
So here’s the riddle. This is what attracts me to the challenge of language-based UI:
How can we make a UI with the efficiency and expressiveness of the Unix command line, but that’s easy to learn and that won’t shoot you in the foot?
Enso was our attempt at a very easy-to-learn linguistic UI. You hold down the Caps Lock key and start typing a command. Enso displays an automatic completion of whatever command best matches your input, along with a description of what the command will do if executed. On the lines below the input, Enso shows suggestions for other commands similar to the input. You can use the arrow keys to highlight one of the suggestions, and release Caps Lock to execute it.
It’s far from perfect, but I’m still proud of how easy Enso is to learn. Once someone grasps the basic idea of “Hold down caps lock and type a command”, they can almost always figure the rest out on their own. The suggestion list makes them passively aware of what other commands exist, while the description text teaches them what commands do. Seeing what Enso thinks you mean before you execute helps reduce errors, too.
However, this design for Enso had some basic limitations. Commands were always verb-noun, like “Open notepad”, and could only take a single argument apiece, so the expressiveness was limited. We had plans for how multiple commands could be chained together, but this still hasn’t been implemented.
At Humanized, one of the most common feature requests we got was for a way to abbreviate commands — especially the very frequently used “open” command. Another of the most common feature requests was for the ability to enter the noun before the verb — e.g. to open notepad by typing “notepad” first instead of “open notepad” (the way Quicksilver works). Both these requests were clues that people were being frustrated by the need to type “open” over and over again. The interface was inefficient because the first five characters of “open notepad” were wasted keystrokes.
We tried to improve the efficiency by allowing the tab key to be used to autocomplete the rest of the current word, but hitting tab while holding caps lock requires finger contortions, so this feature was seldom used. We wanted to stick with verb-noun because of the similarity to natural English word order, and we avoided abbreviations because we wanted the behavior to always be consistent, but I’m now convinced the users were right — the inefficiency was a major problem.
There are plenty of other linguistic UIs we could analyze using the three “E”s. For example: The Awesome Bar in Firefox 3 is sort of a linguistic interface by my definition (You type stuff in and stuff happens). It’s very easily learnable (most people can figure it out without being taught), and also very efficient (usually just a few keystrokes to get to the website you want) but it’s not expressive at all (all it does is open pages).
Are the three “E”s mutually contradictory? Do we have to settle for “Easy, efficient, expressive: choose two”? That would be pretty depressing. But I’m not ready to throw in the towel just yet.
How should the ideal linguistic UI behave?
Based on all of these experiences, here’s my current thinking about what the ideal linguistic UI should be able to do.
For ease of learning, it should:
- Accept input in something very close to the human language I’m already familiar with.
- Give me clues about what commands are available.
- Give me clues about what I can type next.
- Give me clues about what the current command will do if executed.
- Give me suggestions about other commands it thinks I might be looking for.
- Help me understand what ranges of arguments to a command are valid, and what the arguments mean.
- Propose commands appropriate to my working context or to the type of data I have selected.
For efficiency, it should:
- Allow the user to start with the noun or to start with the verb.
- Let me autocomplete a partial word with a keystroke.
- Recognize words even if they’re super-abbreviated.
- Remember what suggestions I’ve chosen in the past and pop them up next time I give the same input.
- Let me partially enter something, see the suggestions, choose one as mostly-right, and edit that one some more before executing it.
- Guess, from my context and my selection, what I want, and fill most of it in for me, while letting me easily override it if it’s wrong.
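As a rough sketch of the "super-abbreviated" matching idea, here's one way it could work, expressed with ordinary shell tools over a made-up command list (a real UI would rank matches rather than just filter):

```shell
# Expand an abbreviation like "opn" into the subsequence pattern
# "o.*p.*n.*", then keep only the hypothetical commands that match it.
abbrev="opn"
pattern=$(printf '%s' "$abbrev" | sed 's/./&.*/g')
printf '%s\n' "open url" "options" "print page" "zoom in" | grep "^$pattern"
# matches "open url" and "options"; "print page" and "zoom in" are filtered out
```

Anchoring the pattern at the start keeps the behavior predictable; a real implementation would also score the matches (prefix hits and frequently chosen commands first, say) instead of treating all subsequence matches equally.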
For expressiveness, it should:
- Handle commands with multiple arguments, including optional arguments, that can take various data types.
- If I have data selected, let me use that selection as an input for any of the multiple arguments — or for none of them.
- Let me chain commands together, with the output of one going to the input of the next, like Unix pipes.
- If my input could mean more than one thing, give me a sensible way to resolve the ambiguity.
- Let me compose a complex command out of small parts, in the flexible way that natural language does.
- Let me save a complex command that I’ve created and give it a simple name so I can re-use it in the future.
- Give me an easy way to create my own commands — and to share them with others.
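The "save a complex command and give it a simple name" item is exactly what shell functions already provide; a minimal sketch, with a hypothetical name:

```shell
# Compose a pipeline once, name it, and reuse it like any built-in command.
# 'wordtop' is a made-up name; it prints the N most frequent words on stdin.
wordtop() {
  tr -cs '[:alpha:]' '\n' | sort | uniq -c | sort -rn | head -n "${1:-3}"
}

printf 'a b a c a b\n' | wordtop 2   # "3 a" then "2 b"
```

The design challenge for a linguistic UI is to offer this same power without requiring the user to learn a programming language's function syntax.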
An impressive list of demands? Yes!
Conflicting design goals? Probably!
Impossible? I don’t think so!
Tune in next time for the design workshop where we try to satisfy all of these constraints at once. I’ve got some ideas, but I’ll be looking for your input, too.
July 22, 2008 at 12:21 am
This all sounds great, Jono! Very explicit and well thought-out requirements. I look forward to learning more about how to solve this problem.
July 22, 2008 at 8:23 am
I have already started implementing something close to what you describe. Normal unix commands are wrapped in human language.
So ‘scan localhost at port 25’ becomes nmap localhost -p 25.
Download alpha code here.
https://sourceforge.net/project/showfiles.php?group_id=180091
(elevate 0.2.0)
You can try 'web', 'open web', 'launch firefox', or 'open www' to open Firefox.
Small txt files describe the commands. You can add new commands by writing a simple .ini file.
July 22, 2008 at 10:00 am
Hey. So I'm a bit "annoyed" that you're not making Enso more awesome, but if you're taking the awesome from Enso to my other favorite application, Firefox, that's just… well… awesome. Thanks!
Of course I would be pleased to put my own input here, but I think you've got it covered.
July 22, 2008 at 10:19 am
Of course, programming languages are also linguistic interfaces. But they are truly for experts. I believe that there is an inherent trade-off between ease of learning (not ease of use) and expressiveness. The piano is more expressive than an mp3 player but it takes much more time to learn. Ultimately expressiveness is related to language. You need a rich language in order to be expressive, but it takes time to master a rich language. That's why it is rich. I do think you can get ease of use and expressiveness. A piano is easy to use once you've learned how to play. A sword is really easy to use (just grab the handle and hack at your opponent) but you need to study fencing before you can wield it effectively. I think the ideal interface is like a sword. Anybody can use it immediately, but if you are willing to invest more time you'll be more effective and more expressive. I guess a more formal way to say this is that the interface has a staged/discoverable complexity. As you master one layer you can move to the next layer and increase your power. You don't have to learn everything up front.
July 22, 2008 at 12:03 pm
That is a nice post. You highlighted the lack of discoverability of bash very well. When I found out about tab-key completion, suddenly I could learn much faster, because I knew what to type into man.
I do not think that a discoverable interface must be close to a human language, by which you probably mean English. For vocabulary it's fine, but grammar must be artificial IMHO to make it easy to learn. I think the pinnacles of easy-to-learn grammar are Lisp and Forth. Possibly also Smalltalk, but I don't know anything about that. The problem with these languages is of course the vocabulary, because nobody understands cons, car, and cdr without help. Slime solves that problem partially by showing the function signature in the minibuffer of emacs, which is very helpful for learning.
Another thing you mention is to make the interface forgiving.
I think that is a very very important and very very difficult point. Because as soon as you change state there is no easy general way to rewind in time (and I think that should be a goal in order to keep the UI efficient). Stateless interaction (as in functional programming) usually doesn't help the user too much, though. Or you could have complementary commands like pushd and popd.
Are you an emacs user? I think it is quite discoverable for a very very complex program.
Please keep writing, I’d be very interested in your ideas.
July 22, 2008 at 4:52 pm
What you are describing sounds interesting in theory. I wonder what use Mozilla has for this, though, as I don't believe such an interface is appropriate for web applications.
July 22, 2008 at 5:08 pm
Hi Eran,
Thanks for asking. “What use does Mozilla have for this” is going to be the subject of an upcoming part of this article series, in which I will try to convince you that a linguistic interface could in fact be appropriate for web applications.
July 22, 2008 at 8:49 pm
It seems to me that all of your efficiency points are things that real-world languages resolve through the use of context. For instance, if I'm in the Finder and I type "text", I'm probably trying to open TextEdit. If I'm in Word, have some text selected, and type "text", I'm probably trying to convert to plain text.
Ideally, what you'd want is a program that does the action while at the same time asking for more context: "Okay, it's plain text now. Is that what you meant?", letting the user go "No, I meant open TextEdit" for an automatic undo.
But people are much less patient with machines than with other people. So the bar is probably higher.
July 22, 2008 at 9:05 pm
Robert said:
>I do not think that a discoverable interface must be close to a human
> language by which you probably mean english.
Mozilla is pretty strongly committed to being an international company for an international user-base. (We just got over 50% market share in Indonesia, did you know?) So when I say “human language” I don’t just mean English. I’m writing a prototype that’s based on English but I’m also thinking about other languages.
Word order will be important. I’m close to fluent in Japanese, which uses subject-object-verb order (as opposed to subject-verb-object in English). So a Japanese version of this interface would ideally be designed to work well with the verb (i.e. the command name) at the end. (Which happens to be similar to a stack-based language like Forth…)
The important point is that localizing a natural-language interface will be more involved than simply replacing the command names with translations.
July 22, 2008 at 9:22 pm
I think Quicksilver is the most successful linguistic interface so far. It's very easy to use, it's very effective, searching faster than Spotlight, and it's very expressive, sometimes taking 3 arguments. It's a pity it's only for Mac and the creator has practically discontinued the project.
July 22, 2008 at 9:26 pm
What about an interface like google’s search box?
Let’s call it the ‘Doit-Box’:
a nearly grammar-free commandline with semantics defined by the crowd. It queries a public database and instead of weblinks it returns some sort of macros to choose from.
This results in good learnability as you just have to guess what words other users utilized to describe a certain task. And after all, human languages’ semantics were defined by the ‘crowd’, too.
Implementing this kind of interface/process brings in some tough problems, but it might be feasible within the browser sandbox. The idea is outlined in more detail at http://cryptocarnivore.wordpress.com/
July 22, 2008 at 9:53 pm
The ULTIMATE USER INTERFACE
The ultimate UI is a personal, fully user customizable, fully user definable, transportable, master UI which sits on top of all applications where the GUI and CLI worlds are one and the same, merged.
Because you are using old technology, typically, the standalone mouse and keyboard, you are limited to choosing between GUI and CLI.
I have developed and been using the World's Fastest Keyboard for over 4 years.
Because I have total control of the computer screen from the home row for pointing, clicking, typing, scrolling, deleting, backspacing, and esc'ing, I can operate in a merged GUI-CLI world quickly and easily.
The balance between how much GUI and CLI to use is left up to the user. The user knows what works best for them.
The problem with Enso is the caps key; mode switching is old technology.
I hope to present papers on the World's Fastest Keyboard and on the Ultimate Master User Interface next year as part of my requirements for a PhD in advanced input technology and interfaces.
from the "father of the perfect keyboard" aka inputexpert
July 22, 2008 at 10:39 pm
Hey “inputexpert”,
I am interested in hearing about your ideas. However, you’ve been copying and pasting this exact same blurb into blog comment threads for how many years now? I remember seeing the exact same words on the Humanized blog and other user-interface articles around the web in 2006 and 2007.
I know PhDs take a while, but when are we going to see this World’s Fastest Keyboard? When are we going to see this Ultimate User Interface? Where’s the link to your work? Shouldn’t you be working on your paper instead of mass-posting to blogs?
This is a warning. I approved your comment because it is on-topic and you included a valid e-mail address, but your behavior is dangerously close to spamming. If you put the same comment on my next blog post I’ll consider you a spammer and I will block you.
July 23, 2008 at 9:02 am
A nice post, sums it all up quite well. Some additions:
I take it you are also assuming the requirement that the linguistic UI (I guess it's one of the contenders for the LUI acronym?) "lets me do it all without my hands leaving the keyboard". Certainly a valid goal, though one shouldn't forget the possibility of combining pointing-based input with it, perhaps as an accelerator (think of the way OS X's Command-Tab switcher lets you point to the app's icon with the mouse and then select it normally by releasing Command).
Similarly, keyboard input doesn’t necessarily mean just typing in the command and using only non-literal keys like arrows, tab and return for manipulating it; you could have a whole array of context-dependent hotkeys or key combinations mapped for each part of the command-entry process (nouns, verbs, arguments). These might work a bit like regular keyboard shortcuts (Ctrl-O for opening your “noun”, Ctrl-7 to pick suggestion number 7 down the pop-up list etc.). The difference is that because there is more context available, the available hotkeys are fewer in number than system-wide or application-wide keyboard shortcuts. There are few enough active at a time to be graphically illustrated next to your command line (or the part of it you’re currently editing), fading away once you move to a different part of your command, or exit the linguistic UI quasimode. Besides helping learnability, the graphical representations could also enhance efficiency by acting as clickable (or just pointable) targets – accelerators, as per the previous paragraph, should this fit the design of the LUI.
July 23, 2008 at 9:28 am
>Let me chain commands together, with the output of one going to the input of the next, like Unix pipes.
Perhaps a Yahoo Pipes-like thing would be great for that.
Controlled by the keyboard, you could wire together little windows, all Enso-like.
Like writing in an Enso-like overlay, but instead of submitting you hit another key, which rearranges the overlay so that there is space for another one. Both are prewired so that you can write your next command in there.
With a stroke you could switch the pipe direction or add another command.
If it's a more complex pipe structure, one could use the mouse to place connections between the overlays, turn the direction, and eventually build little "programs" with that.
The rearranging of the overlays is one of the keys, I think, because if there are more than 3 or 4 of those it could get messy.
A little genetic algorithm, which improves the placement of the overlays each generation, could sort that problem out, given a fitness function scoring overall clarity.
I would like to see such a thing in future work, as well as some other technologies missing from most applications.
Such as learning behaviour, physics engines, rotation of everything.
I am currently writing a diploma thesis about that topic, so I hope to present some results soon.
As you can probably tell, English isn't my native language, so mail me if I haven't described my idea comprehensibly.
July 23, 2008 at 10:11 am
Nice to see someone on this who speaks and thinks in more than English. That way, my l10n head doesn't have to be all that paranoid. I'm curious how things will go here, in particular when keyboards are involved, which don't always map to the language you're in.
I remember there was some BASIC dialect or something that was localized, does anyone remember which? Does that still live, any lessons to learn in there?
The other part that I'm thinking about is the "ease of learning" vs "efficiency" piece. I really think that those are conflicting goals. My ad-hoc solution would be to gradually move helping UI out of the way as a user gets more experienced. That's not a simple task of course, in particular as expressiveness grows. Like, someone might be very fluent in 'cat' and 'ls', but not so in 'rm -rf'. It'd be bad mojo to remove help there!
July 24, 2008 at 7:28 am
> gradually move helping UI out of the way as a user gets more experienced. That's not a simple task of course, in particular as expressiveness grows. Like, someone might be very fluent in 'cat' and 'ls', but not so in 'rm -rf'.
Perhaps it could be as simple as putting a “close X” on the help overlay, a corresponding button (with a hotkey) for getting it back if needed, and saving the status of whether help is shown or not on a per-command basis. Something like this would allow one to ditch the help for ‘cat’ and ‘ls’ and still have it appear when entering new or unfamiliar commands, and be able to quickly get it back when one needs to check something.
July 24, 2008 at 11:30 am
What a brilliant idea. Less GUI, more typing. In fact, the same thing applies to scripting languages: why all the clean abstractions? What the programmer really needs is more flexibility, so by extension, we should all develop in machine language. NOT.
July 24, 2008 at 4:51 pm
Jono,
I recently started using Enso at work (I had it installed at home already but didn’t have much use for it there). My company uses Sharepoint, which isn’t fully compatible with Firefox. I had IE opened to Sharepoint, with the url highlighted, and I Enso-ed “learn as open sharepoint”. Simple enough. Except that Firefox is my default browser.
I'm glad to hear you've come around about the "open" verb, but this is another area where Enso could improve. Instead of using the same "learn as (open)" command for urls and applications, you could use "learn as" (with "open" optional for backward usability; keeping users' habits compatible is as important as keeping the application backwards compatible) for applications and "learn as url" for urls. Then the commands could go something like this:
1. “Learn as (open) ie”
2. “Learn as url sharepoint”
3. “(open) ie sharepoint”
Then, I could leave firefox as my default browser, and “(open) gmail” would open it in firefox but “(open) ie gmail” would open it in IE. An additional improvement would give me an easy way to command Enso to open in a new tab (if a browser is already open) versus in a new browser:
1. “ie (tab) sharepoint” (since I want tab to be default, it’s optional), vis-a-vis
2. “ie new sharepoint” (“new” isn’t necessarily the best name for it, but you get the idea)
July 25, 2008 at 8:06 pm
[…] « Language-Based Interfaces, part 1: The Problem […]
July 26, 2008 at 1:28 am
[…] 26, 2008 Now that I’ve blogged about Ubiquity, you should understand why I’ve been obsessing over the properties of a good linguistic UI. It’s not an academic problem: It’s one of the interfaces to the extension I’m […]
August 3, 2008 at 2:40 am
keep at it…you are on to something here…
August 13, 2008 at 6:32 am
There’s an open source OS in development called Haiku (www.haiku-os.org). The idea is to start with BeOS as a foundation and come up with new ideas to solve problems that Microsoft, Apple and the unix world weren’t focusing on. Back in 2007 I wrote this on the Haiku glass elevator list (for future development)
http://www.bug-br.org.br/pipermail/glasselevator-talk/2007-January/000253.html
I just found out about Enso and your blog today (thanks to an initial link by…a Haiku developer). This is definitely the logical follow up to what I was thinking, that we need to go beyond the visual UI that everyone tweaked for the past how many years and go back to the verbal.
I’d like to see a strong verbal UI built into Haiku on the OS level as this would improve usability for all native apps on the system. The web, which firefox covers, is its own platform but there’s more to computing than just the web browser!
August 14, 2008 at 2:59 am
There’s an additional criterion you might want to think about applying to an interface mechanism, besides learning, efficiency, and expressiveness: that it be “suggestive.”
What do I mean by this?
Well, for a non-expert user, the problem with the command line is that it "suggests" nothing — it shows no nouns, it shows no trace of past verbs. (You can reveal both of these, of course… if you know some verbs.)
The System 7 desktop, by comparison, supplies you with many “places” and “things” — some of those places/things turn out to be verbs (like the menu items), and some of them turn out to be nouns (like the hard drive icon).
Crucially, in a well-fashioned GUI, you will almost never *forget* a piece of functionality that you have used. Verbs can suggest themselves to you by being neighbors of other verbs.
Contrast this with the bash, which I myself (to take a random example) used to be quite adept with 10 years ago (as I was with System 7, for that matter). I have forgotten a simply massive amount of Unix verbs (and adjectives, and adverbs…). It’s awful. I amassed so many little nuggets of wisdom and power, yet the system did almost nothing to help me keep track of what I had learned. Would anyone “forget” how to use System 7 (imagining, for the moment, that they had not gone on to use System 8, 9, and X…)?
The bash powers that have been lost will never “suggest” themselves to me again — if I absolutely need them, I will have to resurrect them through out-of-band channels (such as a web search). There’s no there there.
And Enso had some of the same failings as bash, in terms of being suggestive (in my sense). As an Enso user, I had no idea of the real extent of its powers because there was no mechanism of exploration, no “place” to “keep” stuff, no things that are neighbors of other things — there was no there there. There was no systematic way of seeing the possible.
Auto-complete will make me more efficient at getting where I want to go. But it will not really solve the problem that I don’t have a map of all the places I *might* go. Or all the places I’ve been.
September 1, 2008 at 10:42 pm
I would add two more challenges: extensible and secure.
These five goals are probably the foundational problems of computer science.
September 4, 2008 at 6:28 pm
[…] months ago I wrote a post describing the properties of the ideal linguistic interface. Now that we’ve released Ubiquity 0.1.1, I want to look at how well the current state of the […]
September 4, 2008 at 7:23 pm
[…] Language-Based Interfaces, part 3: Report Card for Ubiquity 0.1.1 September 4, 2008 I believe in tough love for my brain-children. It’s report card time. Let’s see how Ubiquity 0.1.1 holds up to my exacting standards. […]
September 12, 2008 at 12:06 pm
I can't find a copyright notice: I'll translate part of this article; if there's any problem, let me know… naturally… great article!
November 11, 2008 at 8:26 pm
The text-based MUD/MOO/MUCKs (and to some extent their single-user text adventure game ancestors) may have something to offer in this area. For an interesting adaptation of that type of interface into a “natural language interpreter”, see:
http://netjam.org/projects/quoth/
November 19, 2008 at 10:13 pm
Hey, Jono. Very interesting article.
On the triple E's, I think they are mutually exclusive (or a 2-of-3 thing). But that need not be a bad thing.
All users have a learning curve within an app. As the user learns more about the application, the application should give them more power and keep them interested.
Then an application could become easy, then efficient, and then expressive, by stages.
For instance (this is not a LUI, but it helps as an example):
IntelliJ IDEA from JetBrains is VERY easy to use at first; it lets you do what you want to do without having to learn all the options, views, or menus. If you want to write code, you just write it and you're done.
With time, you learn from hints in the app how to do things faster; then a few keystrokes are enough for common tasks (type soutv + TAB, for instance) and it becomes efficient.
After several flight-hours, when you're in total command of the app, you can add plugins and extra configuration to make the tool really powerful, and then it becomes "expressive" (well, you get the idea: the third "E").
The opposite is Eclipse (at least for me). The first time I tried to modify a simple file, I was totally confused by the perspectives. I couldn't find the right menu option, and I felt very frustrated. I shut down the app and edited the file in Notepad. This application was hard for me to use at the beginning. Now I know it is as powerful as IDEA, but it took me a long while to find out (and a lot of help from friends).
The same happens with video games: they start at level 0, where everything is really easy, and they end up with a lot of tricks, and level 60 is as interesting as the previous 59 levels.
So my point here is: LUIs could have all three E's you mention, but not necessarily at the same time. They can evolve and let the user discover new features when he/she is ready, not before.
Enso is super-easy to use and to learn, very intuitive, but I share your users' complaints: it's inefficient to have to type 'open' each time, and after a while there's not so much to do with it (I mean, the second year of usage probably won't be as interesting as the second week).
By letting the user discover and master advanced options (intuitively), a LUI can achieve the triple E's you mention. The thing is, the user won't have them all at the same time.
By the way, I've been working on the UltraSuperArchi-Keyboard mind-reading stuff, a microwave-activated user interface, but neither you nor anyone else can have a look at it because… well… it is… ehm… ultra secret.
Nahh, seriously. I came across Enso because I was so frustrated with file management and was looking for something as powerful as the unix CLI to use in Windows (I installed Cygwin but ended up using "explorer ." all the time after a couple of commands). I think I will create some short app along the same lines as Enso in the next months. Thank you all (@humanized) for the inspiration. I hope you look on this kind of endeavor with good eyes.
Salud!!!
November 19, 2008 at 10:30 pm
Ouch… I've just realized there are another two articles on this topic from you… I hope my comments are still valuable. Well, the internet is timeless… it's been about half a year since you wrote this. Byte!
December 3, 2008 at 3:35 pm
Jono is my prophet!
February 6, 2009 at 6:08 pm
Not only language: it could also be Java beans, downloadable and checked. Commands are very short; only when you see a macro can it be very long. You could write pages of macro code, or go to the Java bean, and when the Java bean is a compiled, third-party-checked object, you could visualize the object with the header object call and visually set the information stream to and from the called object into your own Java code…
Then you should never have to program more code than necessary, while the web-site database has instructions for already-programmed Java beans/objects…
June 24, 2009 at 5:02 pm
Would you say vi is also one of the biggest violators of human interfaces? The program is just riddled with hundreds of hotkeys that usually give little indication of what they do. I would love to see a concept of how vi could be rewritten to be more human-friendly while maintaining efficiency…