- 01:11 - The Design Patterns Book
- 02:45 - The Eclipse Project
- 09:24 - Language Server Protocol: Overview
- 15:16 - What can you do with a server that implements the LSP? Incremental usage?
- 20:12 - Keeping the Tools in Sync and Refactoring Support
- 24:33 - Keeping it Performant
- 29:41 - What kind of proliferation of code smart tools are there that implement the LSP?
- 34:51 - What are the challenges encountered trying to build abstractions that work for 40 different languages?
CHARLES: Hello everybody and welcome to The Frontside Podcast Episode 97. My name is Charles Lowell. I’m a developer here at The Frontside and your podcast host-in-training. And with me today, we have two very special guests. They have been working on technologies that have run very parallel to my entire career as a software developer. And we’re going to talk about that. So with us today are Erich Gamma and Dirk Baeumer who are developers on the team developing VS Code, which if you’re in the frontend space is taking that area of development by storm. It’s just amazing, some of the things they can do. Lots of people are using it every day. Lots of people are trying it. And so, we’re going to talk about the technologies that underlie that and the story of how it came to be. So, welcome Erich and welcome Dirk.
ERICH: Hello from Zurich.
CHARLES: Alright. Zurich to Albuquerque. Here we go. As a first start, I would have to say my first contact with this story, I at least have to mention it because – and this is for Erich – you wrote a book that was very, very instrumental in my formation as a young developer. I think I was about 22 years old when I read ‘Design Patterns’. And I don’t know. I still carry a lot of those things with me to this day, even though a lot of things have changed about the way that we do development. I still carry a lot of those lessons, I think especially things like the state pattern and the strategy pattern, and stuff like that. I want to move onto other things, but I was hoping that we could talk just a little bit about, what are the things that you find still kind of relevant today?
ERICH: Well, as you said, some of the things are kind of timeless and we’re lucky to have found these things. And I still love all the patterns. But I must say, things have changed, right? So, at that time, we thought objects are very cool. And as we have evolved, all of a sudden we think, “Oh, functions are actually very cool, too,” right? Closures and so on. So, I think we got broader, and of course if you use functional programming, you have many more patterns available as you program. So, I feel some of the object thinking still applies. But that’s not the only thing that counts anymore. Today it’s functions, stateless, immutability, and all those things within functional programming, which is [straight] and which [inaudible] in our team.
CHARLES: Yeah, yeah. I would love to see an update to how do these concepts transfer into functional programming. But anyway, just wanted to say thank you for that. And it was about the same time that, a few years after, I don’t know the exact same timing, I want to wind back. Because we’re going to talk about VS Code but before VS Code, there was a project that both of you all worked on called Eclipse, which I also used. Because at the very beginning of my career, I did a lot of Java development. And it really opened my eyes into a level of what tooling could do for you that I didn’t see before. And I was wondering how did you arrive to there? Because before that, I was using Emacs and Vim and Joe’s Editor and things that were editing the text files. And how did you kind of arrive at that problem? Because I feel like it’s very similar to the one that VS Code solves, but this was what, 15 years ago?
ERICH: I think it’s older, right?
DIRK: It’s 17, 18, yeah. Yeah, yeah. It was the end of the millennium, right? So to be honest, Eclipse wasn’t the first development tool we worked on. Back then, we worked at the company Object Technology International. They worked on Smalltalk tools. And of course, Smalltalk had a great IDE experience, right? So back then, Java became popular. One idea was, how can you preserve the great Smalltalk coding experience? [Inaudible]
CHARLES: Ah, okay.
DIRK: [Inaudible] and find all references, method-level history versioning, and so on. So, that was the input that got Eclipse kicked off. And one idea we had at that time, Eclipse is our opportunity to make everything right. And as we have seen now, when we did VS Code, we could even improve what we have [inaudible] at that time.
So an example, in Eclipse we thought plugins are very cool and we have kind of a microkernel. And you load all of the plugins in the same process, they have a rich API, and so on, which is great. But we found over time, if you have lots of plugins and they do bad things and they run in the same process, it’s not the best thing.
CHARLES: Ah. Right. And so…
DIRK: [Inaudible] have a different architecture. We believe now in isolation, separation. So, we now run extensions in a separate process that communicates through RPC with the IDE, so that we are in full control. And we can always say: you can save the tool, save the document, no matter how badly a plugin behaves, even if it decides to go into an endless loop. Because with a separate process, the hope is that one CPU is still open, available for you, so that you can be safe from the other process. So, that’s one example, right? Eclipse has done many things right, but the multi-process architecture I think is a major switch. And the other major switch is, at Eclipse time you thought Java is cool, everything has to be in Java.
DIRK: We no longer think like that, and that brings up this other topic of the language servers that we can also talk about at some point.
In Eclipse you needed to program in Java. With the LSP you can program in any programming language. In Eclipse, if you really wanted to do something nice with code complete and stuff like that, you had to hook up a lot of stuff. So, we raised that to another abstraction layer where we talk more about the data people provide, and we do a lot more for them in the user interface compared, for example, to Eclipse. That lowers the barrier to integrate languages in Visual Studio Code compared to the barrier you had to integrate something in Eclipse. And so, [inaudible] for that one was that there are a lot more tools and programming languages out there that have importance than 10 years ago.
ERICH: I’ll give you an example. So, when we did C support in Eclipse, it was also our team that seeded it. Of course, it took off and now has a great community behind it in Eclipse. But you wrote the C tooling in Java. And of course, that means you built the parser in Java, and of course there are great C parsers around, C frameworks. But it also means you cannot dogfood what you write. You write Java but you don’t program in C++. I think what makes VS Code so appealing is we are very aggressive dogfooders. We want to use it ourselves, and of course [inaudible]. That’s why [inaudible] is very good. The C++ guys, they program C++ and they write in C++, so that’s how they make it very good, that you have this feedback loop.
DIRK: Your new cool language.
CHARLES: Or maybe we take a Lisp or something where writing the parser is very easy.
DIRK: Even that, you have to resolve symbols and so on.
CHARLES: Okay, okay.
DIRK: Even the parsing [inaudible]. But yeah, let’s take a fancy language like Lisp or whatever. So, the first level I think is you want to get some nice coloring. That’s the first level.
DIRK: So, you get some coloring. And what we do there actually in VS Code is we tap into the community from TextMate. So, we use TextMate grammars to support coloring in languages, which gives us access to a long tail of languages. So, chances are, if your language is not too exotic, you will find a grammar that describes how to color it, what the tokens are in your language, and then you can get your language colored. That’s step one. The next step is of course you want to get smarts like IntelliSense and so on. Ideally of course you can say, “Well, maybe there is something already around that has abstracted the parser and you can use this library.”
ERICH: So, then the next level is to say, “Okay, well you have your code you encapsulate it in a server that you can talk to through some protocol.” And now the challenge is what protocol do you talk to? Typically in the language, the library you get, it will use some ASTs, symbols, type bindings. And what Dirk mentioned with lowering the bar is that assuming you have those ASTs, the way you talk then with our tool is through a protocol that is not at the level of the ASTs but at a higher level.
CHARLES: A higher level than the ASTs.
ERICH: No, yeah. A higher or simpler level. Let me give you an example. You want to find the definition of a symbol in your fancy language. The way the protocol works is you only tell it: in this document with this URI, at this position, I want to find the definition of the symbol that is at this position. The request goes over the wire to the other process: a document URI and a textual position. And of course, in the server you use the AST: you find the symbol, you find the binding of the symbol, which gives you its definition. Of course you use your AST to analyze it. But what gets sent back over the wire is just another document reference and a position.
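A minimal sketch of what such an exchange looks like on the wire. The method and field names follow the published LSP specification; the URIs, positions, and file names are invented for illustration:

```typescript
// Sketch of a go-to-definition round trip as JSON-RPC messages.

// Client -> server: only a document URI and a textual position go over.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///project/src/main.fancy" },
    position: { line: 14, character: 7 }, // zero-based line and column
  },
};

// Server -> client: the server consults its AST internally, but what
// comes back is again just a document reference and a range.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    uri: "file:///project/src/defs.fancy",
    range: {
      start: { line: 3, character: 9 },
      end: { line: 3, character: 21 },
    },
  },
};
```

The editor then simply opens the returned document and moves the cursor to that range; no language-specific types ever cross the process boundary.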
CHARLES: I see. So, you’re really like pinpointing a point in just the raw bytes of the document. And you’re saying, “Look, what is here?” And you just want to delegate that completely and totally to this other process. So, the IDE itself doesn’t know anything about the document?
ERICH: It knows about the document, right?
CHARLES: I mean, it knows about the textual positions of the documents and the stream of characters, but not the meaning.
DIRK: True. The smarts are in the server. And you talk to the smarts at the level of documents and positions. And the [good thing is], since it’s a protocol at this level, it makes it easy to integrate into one editor, which is VS Code, but also into other editors. So, that’s why we came up with the idea of a common language server protocol, which allows you to provide a language not only for one editor but for many editors. That was a challenge we had in VS Code. Remember, when we started, we were kind of late to the game. We said, “VS Code should be in between an IDE and an editor.” But what we liked from an IDE is of course code understanding: IntelliSense, go to definition, find all references. But how do you get that for a long tail of languages? We cannot do it all ourselves. So, we need a community to tap into. [Similar to] how TextMate grammars are kind of a lingua franca for coloring, we were looking for the lingua franca for language smarts. And that’s what the language server protocol is, which means you can integrate it in different IDEs, and once you’ve written a language server you can reuse it.
CHARLES: I guess I’ve got two questions. What are the kind of things that I can do with a server that implements the language server protocol? And then I guess the – so we’ve talked about being able to find a reference. And is there a way you can incrementally implement certain parts of the protocol as you go along?
DIRK: Yeah, basically you can. The protocol, on the server and the client side, talks about capabilities. The server can, for example, say, “I am only supporting code complete and go to definition and find all references,” and something like implementation hierarchy or document symbols or the outline view is not supported. And then the client adapts dynamically to the capabilities of the server.
DIRK: That’s one thing. And the set of capabilities is not fixed. So, we add to them. We just added four or five new capabilities to the protocol last week. So of course, we listen to requests that come from other IDEs for what they would like to see in the protocol, and to what we see in Visual Studio Code that we would like to extend. And that’s the way we move the protocol forward.
DIRK: It’s capability-based and not, so to speak, version-based. So, [inaudible] versioning at the end of the day.
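A rough sketch of how that negotiation looks in practice. The capability names mirror the LSP’s ServerCapabilities structure, heavily simplified; the client-side helper is made up for illustration:

```typescript
// Sketch: a server advertises only what it implements in its response
// to the "initialize" request, and the client adapts to that.
interface ServerCapabilities {
  completionProvider?: object;
  definitionProvider?: boolean;
  referencesProvider?: boolean;
  documentSymbolProvider?: boolean;
}

const capabilities: ServerCapabilities = {
  completionProvider: {},   // code complete: supported
  definitionProvider: true, // go to definition: supported
  referencesProvider: true, // find all references: supported
  // documentSymbolProvider omitted -> the client disables its outline view
};

// The client checks before issuing a request rather than assuming support.
function supportsOutline(caps: ServerCapabilities): boolean {
  return caps.documentSymbolProvider === true;
}

console.log(supportsOutline(capabilities)); // prints "false"
```

Because the check is per-capability rather than per-version, old clients and new servers (and vice versa) keep working: an unknown capability is simply ignored.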
CHARLES: Right. You can incrementally say, “I’m going to have,” if I’m starting to write a server, I can say, “Well, I’m going to only start with just find definition at point.” And that’s the only thing that my server can do.
ERICH: Well, there are some basics, right? Keep in mind you have two processes. And once the user opens an editor, the truth is in the buffer, in memory, in the one process. The basic thing you have to do in a language server is support the synchronization of [inaudible]. Once you open a file in the editor, the truth is in the buffer, and then you have to sync it over.
ERICH: [Inaudible] close, the truth is on the file system, and you also have to tell this to the server. Because the server has to know where the truth is.
DIRK: That’s correct. There are two open/close handshake methods and change methods; this is the minimum you have to implement. But, for example, for Node itself, we provide libraries that help you with this. And the protocol is not very complicated. It’s a buffer, then it’s change events: either an insert, a delete, or an edit.
CHARLES: So, let me try and get this straight in my head. I think I understand. The problem is that the VS Code, or your code editor, it’s actually making changes to the buffer, and it needs to communicate those changes to the server. Or does the server actually make the changes itself?
DIRK: The editor does make the changes. So, the protocol is spec’d in a way that as soon as an editor opens a document, the ownership of the content travels from the server to the tool. And the server is basically not allowed to read the state of that content from disk anymore, or get it [inaudible].
DIRK: Therefore, the client guarantees that everything the user does in that document is notified to the server, so that the server can move the document forward.
DIRK: [Inaudible] we see the close event, which basically, with the close, transfers the ownership of the document back to the language server. And it is allowed to re-read that content from disk if it wants.
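As a sketch, the two handshake notifications that move ownership back and forth might look like this. Method and field names follow the LSP specification; the URI, language id, and content are invented:

```typescript
// didOpen: the full buffer travels once, and from here on the client
// owns the content -- the server must not re-read the file from disk.
const didOpen = {
  jsonrpc: "2.0",
  method: "textDocument/didOpen",
  params: {
    textDocument: {
      uri: "file:///project/src/main.fancy",
      languageId: "fancy",
      version: 1,
      text: "let x = 1\n",
    },
  },
};

// didClose: ownership goes back to the server, which may now read the
// truth from the file system again.
const didClose = {
  jsonrpc: "2.0",
  method: "textDocument/didClose",
  params: { textDocument: { uri: "file:///project/src/main.fancy" } },
};
```

Between these two notifications, every user edit is reported to the server as a change event, so the server’s mirror of the buffer never goes stale.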
ERICH: Here, the protocol is really data-driven. Dirk mentioned that earlier, right? So, basically what flows between the server and the tool is data. So, what do we mean by data? You ask for IntelliSense or completions at a line. What flows back is just data: a list of completions that flows from the server to the client. And then the client decides what to do with this data, and decides to modify the document by inserting the completion proposal that the user selected.
CHARLES: Right. And then if it decides to make any updates, it needs to send those to the server.
CHARLES: So, if I actually insert the method that I want to call there, I’m going to be inserting nine characters, and I need to tell the server, “Hey, I just inserted nine characters to this document,” something like that?
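A sketch of what such a change notification might look like. The method name and shape follow the LSP specification; the URI, position, and inserted text are invented:

```typescript
// Sketch: incremental didChange -- only the delta travels, not the
// whole file. The version number lets the server order the edits.
const didChange = {
  jsonrpc: "2.0",
  method: "textDocument/didChange",
  params: {
    textDocument: { uri: "file:///project/src/main.fancy", version: 2 },
    contentChanges: [
      {
        // an insertion is an edit with an empty range and the new text
        range: {
          start: { line: 10, character: 8 },
          end: { line: 10, character: 8 },
        },
        text: "getTotal(", // nine characters inserted at the cursor
      },
    ],
  },
};
```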
CHARLES: Ah, okay. So now, I’m starting to understand what you’re talking about when you say data-driven. It’s literally just telling the tool – the tool proposes, “I want to do this rename.” And then the server provides all of the information that is required to actually do the rename. But it doesn’t actually do the rename itself. It just provides the data.
DIRK: There are a couple of reasons for it. The edits, at the end of the day, are again edits, more or less the same edits the client sends to the server when the user types in the document. This is the protocol. On top of it, something like creating a file or renaming a file comes as a result back to the client. And since it is a client/server architecture, the whole process is async. So, we have to give the client the chance to revalidate whether the edit structure that comes back is still valid. If it is still valid, the client basically applies it. And by applying these edits to these documents, they will automatically flow back to the server until the client either closes these documents again or saves them. The reason being that some of the tools may even show you a preview. You can select only some of the edits and apply them. So, there’s always an interaction in these refactorings, and to make that possible, as Erich mentioned, the whole protocol is data-driven. We don’t go to the server and say, “Okay, do that rename,” and it writes that back to disk. It computes a set of transformations to bring the current state of the workspace into the new state after the refactoring.
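A sketch of such a rename result, shaped like the LSP’s WorkspaceEdit structure (simplified; the files, ranges, and new name are invented):

```typescript
// Sketch: the server answers a rename request with pure data -- a map
// from document URI to the textual edits in that document. The client
// can preview, filter, and apply them itself; nothing is written to
// disk by the server.
const renameResult = {
  changes: {
    "file:///project/src/main.fancy": [
      {
        range: {
          start: { line: 2, character: 4 },
          end: { line: 2, character: 12 },
        },
        newText: "newName",
      },
    ],
    "file:///project/src/other.fancy": [
      {
        range: {
          start: { line: 7, character: 10 },
          end: { line: 7, character: 18 },
        },
        newText: "newName",
      },
    ],
  },
};

const touchedFiles = Object.keys(renameResult.changes).length;
console.log(touchedFiles); // prints 2
```

Because the result is just edits per document, the client can show a preview dialog, let the user deselect some occurrences, and apply the rest, exactly the interaction described above.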
CHARLES: I see.
ERICH: [Inaudible] be fully transparent. Actually, no. Refactorings, Dirk [inaudible] refactorings for Eclipse, so we can go deep on that. What we don’t support right now in the protocol: we support edits in the buffer, but when you want to rename a class in Java, you also want to rename the file. And that’s something we’re currently working on supporting in the specification of the language server protocol. So, we don’t have that yet. But we support code actions, quick fixes that you probably know from Eclipse. And you can use them to do refactorings like extract method, extract constant, or extract local variable, things like that you can do at the level of the language server protocol.
CHARLES: Wow. That is…
ERICH: I think [inaudible] right now. Let me go back to the Java thing. The Java language server actually has support for refactorings. And there is now a language server protocol implementation for Java, provided by Eclipse. So, all the support you had in Eclipse for Java, or most of the support, is now also enabled in VS Code.
ERICH: [We don’t] really have to reimplement it, because you can reuse it. And that’s the big thought we have. You want to reuse language smarts as much as possible because they are so hard to implement.
CHARLES: Right. And so, you can do that because you’re providing this abstraction between the tool and the actual smarts, which is really, really cool. I do have to ask: how do you make it fast? Because you’re describing this tool, this client and this server, and they’re syncing. They’re keeping this distributed state in sync, and how do you keep that from becoming too chatty? Or is it something that you have to consider? Or maybe I’m overthinking it because I haven’t dealt with it?
DIRK: So, at the end of the day, it is chatty. But it is made performant in the way that it’s very incremental and partly event-based. So for example, if you type in the document in the editor, you can decide to [inaudible] sync the full content of the document, which we do not recommend, but for some basic exploration that is something people do. And we have [inaudible] the delta-encoded mechanism. So, we sync the buffer once, and after that you only get the edits the user makes. These would be chatty, of course, since the user types them, so we debounce them and collapse them on the client side and only send them when we know the server really needs them, because we have another request we are asking the server, or after a certain timeout. So, there are smarts behind it. But the protocol is kept performant by making it an incremental protocol at the end of the day, and not sending too much data back and forth.
ERICH: Right. We don’t serialize ASTs. We serialize positions, a list of items for completions. And actually, the transport is just JSON-RPC.
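On the server side, keeping the mirrored buffer current from those incremental edits can be sketched like this. Offsets are simplified to plain string indices here; a real server would first convert line/character positions to offsets:

```typescript
// Sketch: apply one incremental change event to the server's in-memory
// copy of a document. Every edit is "replace the range [start, end)
// with newText"; an insert has start === end, a delete has newText "".
function applyEdit(buffer: string, start: number, end: number, newText: string): string {
  return buffer.slice(0, start) + newText + buffer.slice(end);
}

let buffer = "let count = 0;";
buffer = applyEdit(buffer, 4, 9, "total"); // the user renamed the variable
console.log(buffer); // prints "let total = 0;"
```

Since each message carries only a few positions and a short string, the per-keystroke payload stays tiny even for large files, which is what keeps the chatty protocol cheap.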
ERICH: And actually, there are different usages now for the language server protocol. And there is one host, Eclipse Che, which brings it again back to Eclipse. They actually run language servers remotely.
ERICH: And if you use it, you can run it in the browser, you get IntelliSense, and of course I guess it depends on how far away you are from the server. But it seems to work, according to feedback we’ve heard.
ERICH: The feedback we heard from them [is pleasant]. So, they use many of the language servers.
CHARLES: So, is this a product that they have where the language server is running in the cloud and you send – your entire codebase essentially goes over to the language server and you can export the smarts to the cloud?
ERICH: It’s one step at a time. So, Eclipse Che, kind of, they have what they call a cloud workspace, which means the workspace is in the cloud. And [inaudible] code smarts of the workspace in the cloud, they can run the language servers in the cloud. It’s a [inaudible]. One user has one workspace, has one language server.
CHARLES: That sounds amazing. And if they can make it performant.
ERICH: We have done cloud IDEs, right? If you look at the history of Visual Studio Code, we also had our stuff running in the cloud at some point. That’s how we started. Before we pivoted to VS Code, our exploration was how far you can get coding done in the browser. That’s why the project is six years old: the first two years were that exploration.
ERICH: And we had some [inaudible] there.
CHARLES: So, I’ve played around with a lot of cloud IDEs and I’ve found them to be neat, because every few years one comes along. But yeah, it does seem that there are certain challenges, and it’s nice to have a client running and just be able to have the files locally. Is that a performance thing? Because if VS Code is written in TypeScript, theoretically it could run in a browser, right?
ERICH: Of course. The [inaudible] there still runs in the browser. It’s used by many tools that run in the browser. Actually, if you want to edit your source code in the browser, it’s using the same editor that’s running in VS Code. So, that’s how we started. Cloud IDEs, yeah, we were at this point. We had our cloud IDE. We could edit websites in the browser, source control them, have a command line, deploy them. What we found is it’s great for some scenarios like code reviews or doing small tweaks to files. But when it comes to real development, you use so many other tools. And you want to just have them. And [inaudible] a long tool chain problem. So, as a developer, you just want to use other tools as well. And that’s why you can’t have them all in the cloud.
ERICH: And [inaudible] we said at some point, it was a great lesson we had, that you can program in the browser. But when you want to have really [seven by 24] coding, you want to have a desktop experience. So, what we then did, we moved over the code we had running in the browser using a shell, the Electron shell, so it can run on the desktop.
CHARLES: But there’s theoretically, you could be running your language server for example in the cloud, but everything else on the desktop.
ERICH: Yeah. Some people do that.
CHARLES: Okay. Wow. It’s crazy. It’s heady stuff. We’ve talked about how the barrier to implement the code smarts is much lower than it has been in the past. What kind of proliferation of code smart tools are there now that implement the language server protocol? Like how many different languages would you say have airtight…?
DIRK: So now, [inaudible] time where we don’t count anymore. You tell us a language and I can look it up, whether it’s supported. Tell me a language and I can tell you whether – no, we have a website.
DIRK: And when I look at it, we have about 40 languages.
CHARLES: Wow. That’s probably about, pretty much every mainstream language.
DIRK: Yeah. I cannot find what isn’t there.
CHARLES: Yeah. It almost kind of begs the question, is this going to be the new bar for a language? Because I remember when I was starting out, really you just needed to have some interpreter or some compiler to have “a language”. And nowadays, it’s not just the language. You need to have a command line tool for managing your dependencies. And you need to have a package system with a public repository where people can publish reusable units of code. And what’s become expected of a language to succeed has gone up. Is having a language server implementation going to be part of the bar, the new bar, for “Hey, I’m thinking about creating a language”? I haven’t really arrived until I have a package manager, a command line for resolving dependencies, documentation, and a language server.
DIRK: I personally think that is our dream at the end of the day, to get there. We know about languages that do so. A lot of these language servers come, for example, from the people that developed the language. For example, the Rust guys, they do the compiler and they actively work on their language server as well. And at the end of the day, the advantage of that approach, since the Rust language server is written in Rust and runs in Rust, is that they can reuse so much code they already have written in Rust. It’s easy for them to package that up in the server, and basically the people that maintain the compiler, or at least the same team, maintain the language server at the end of the day.
ERICH: And that’s why we call it a win-win for the language provider. Because if you implement the language server using the language server protocol, then it can be integrated easily by the tool provider. And it’s a win for the tool provider, since there is a common protocol across all these languages you have to support. You can write an implementation once and benefit again and again by supporting many different languages, which turns the matrix problem, one language support for each tool, into more of a vector, right? It reduces the matrix to a vector. You only write language servers that get integrated into different tools.
DIRK: And [inaudible] it is especially appealing, I think, for new languages that come out, because it lowers the bar for them to get into existing tools. Because if they write a language server speaking the language server protocol, integrating that in Visual Studio Code at the end of the day is basically packaging up an extension for Visual Studio Code and writing 20 lines of code.
DIRK: And same [inaudible] for other IDEs that exist where people implemented the language protocol client side for the tool, for example. For vim or for Atom.
DIRK: So, new languages I think definitely, we see that trend go onto the language server protocol because that gives them an entry point into a large tool community.
DIRK: I think we already touched on that at the beginning. The appealing part of the LSP is that it’s not talking about the programming language itself. It’s talking about things I can do with source code. For example, requesting code complete, go to definition, find all references. And the data that flows between the client and the server is not in terms of the programming language itself. It’s about editor abstractions. We talk about documents and positions. We talk about edits that are applied to documents. We talk about snippets and stuff like that. And these abstractions, since they are programming-language-neutral, are a lot easier to implement for different editors. And the [inaudible] where the [inaudible] would speak AST nodes and symbols and functions and classes and methods, that at the end of the day would not work. Because if I ask for go to definition, the result is not a function or a variable definition. It’s simply a position in the document with a hint about which range to select.
CHARLES: Okay. Yeah.
ERICH: [Inaudible] places. In only a few places do we have to really abstract across languages. Like, for instance, completions. When you do completions, you don’t know: is it a variable? Is it a function or a method? That’s where we have to abstract. But that’s one of the few places. And again, it’s an enumeration.
DIRK: Yeah. And that’s only to present an [icon].
DIRK: It’s only to give you a nice icon in front, because when you insert it, what comes back for completion item is basically a textual edit or a bunch of textual edits that when you select that completion item, we take these edits and apply them to the document buffer. And whether you edit a functional programming language or some other stuff, Prolog or whatsoever, it does not matter at the end of the day.
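A sketch of such a completion item. The kind values mirror a small subset of the LSP’s CompletionItemKind enumeration; the label, position, and inserted text are invented:

```typescript
// Sketch: the "kind" is just an enumerated value the editor maps to an
// icon; the actual insertion is an ordinary textual edit.
enum CompletionItemKind {
  Text = 1,
  Method = 2,
  Function = 3,
  Variable = 6,
}

const item = {
  label: "getTotal",
  kind: CompletionItemKind.Method, // picks the icon, nothing more
  textEdit: {
    range: {
      start: { line: 10, character: 8 },
      end: { line: 10, character: 8 },
    },
    newText: "getTotal()",
  },
};

console.log(item.kind); // prints 2
```

Nothing in the item depends on the language being completed; selecting it just means applying `textEdit` to the buffer, the same edit mechanism used everywhere else in the protocol.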
CHARLES: Yeah. That simplicity, and treating it at that simple of a level is what unlocks all those superpowers.
ERICH: It unlocks lowering the bar. But of course, if you look at some [of the demands], refactorings, whatever, not all of them can be funneled through this low-level abstraction. So of course, the criticism of the LSP is that if you already have a very rich language service, you might not get it all through the LSP.
DIRK: That’s true.
ERICH: And the [inaudible], that criticism we see of the LSP. But it’s a tradeoff, like so many things in software.
DIRK: Yeah. But what we learned there, looking at different types of refactorings, is that it’s more the set of input parameters that varies a lot between languages. The result of a refactoring can, for every programming language that is at least document-based, [inaudible] in the lingo the LSP speaks. Because at the end of the day, it’s textual edits to a document, right?
ERICH: So, many people like LSP, but there are people that don’t like it. And people that have rich language services, like IntelliJ, [Cool Tool], and [inaudible], say even with LSP we would only get 20% of [our cool] features. Which is a little bit of a downgrade and not really true. But you see, it’s a tradeoff.
ERICH: And if you want to [inaudible] language available broadly, I highly recommend packaging it as a language server. The chances that it gets used and supported by different tools are much higher than anything else.
CHARLES: Right, right. So, it’s kind of like, what’s the UNIX thing? The universal text interface, and how it seems counterintuitive but it actually means you can compose literally anything. Because so few assumptions are made.
ERICH: I would just recommend, [inaudible], go to the website that we have about the language server protocol. I’m pretty sure it will be in the introduction or whatever. It’s microsoft.github.io/language-server-protocol, and there you see all the implementations of languages, who integrates language servers, and also what kind of libraries are available if you want to implement your own language server.
DIRK: And a full specification.
ERICH: And the specification is there as well. Yeah.
CHARLES: Yeah. If you want to go ahead and do it yourself. Well, thank you so much, Erich. Thank you so much, Dirk, for coming on the show to talk about the language server protocol. It’s very exciting to me and I think it’s exciting for development in general, because even if it’s 20, 30, 50% code smarts for every single language, just the billions and billions of hours that you are going to save developers over the coming years, it’s a great feeling to think about. So, thank you for all your work and thank you for coming on the show.
ERICH: You’re welcome.
DIRK: Yeah. It was fun talking to you.
ERICH: Yeah. [Inaudible]
CHARLES: Yeah. If people want to continue the conversation, is there a good way that they can get in touch with you?
DIRK: Usually GitHub Issues. So, where the language server protocol lives, it’s a project on GitHub. Simply file issues. We accept pull requests. I think that’s the way we communicate.
CHARLES: Awesome. Again, if you want to get in touch with us, you can get in touch with us at email@example.com or you can reach out to us on Twitter. We’re @TheFrontside. So, thank you everybody for listening. And we will see you next time.