Good morning and welcome to today's Electronic Design webcast. Our topic today-- an introduction to the new TI Clang Compiler sponsored by Texas Instruments. I'm David Maliniak with Endeavor's Design and Engineering group. To begin, let me explain how you can participate in today's presentation.

First, if you have any technical difficulties during today's session, simply type your issue into the Ask a Question box. And a member of our team will assist you. You can also click on the question mark Help button in the upper right corner of the screen. Additionally, we welcome your questions during today's event.

We will answer as many questions as possible during the Q&A session that will follow the main presentation. But please feel free to send in your questions at any time. To do so, simply type your questions into the Ask a Question box and click on the Send button.

If you would like a copy of the presentation, please click on the Event Resources button on the side of your screen to download the presentation. Also, please be aware that today's session is being recorded and will be available on the Electronic Design website within the next week. You'll be notified by email when the archive is available.

Now, let's meet today's speaker. George Mock is an application engineer with a specialization in code generation tools for the software development technology organization at Texas Instruments. His areas of expertise include compilers, assemblers, and linkers. He has over 30 years of experience in software development and currently focuses his efforts on solving challenging customer problems and writing application notes.

He is the moderator on the compiler part of e2e.ti.com, which is TI's public forum for customer questions and discussion. Now, let me turn things over to our presenter. George, the floor is yours.

Thank you, David. And my thanks go out to everyone watching today and everyone who later watches the recording. I'm confident you will view this as time well spent.

So here's the agenda for today's talk. The background-- speaking at a high level, what's going on. Compiler elements are where I introduce the parts that make up the compiler. I know that sounds surprising. It will make sense when I'm done.

Flash memory savings is where I make the point that by building with tiarmclang, you end up using less flash memory than before. Compatibility-- there are two areas of compatibility to discuss. One is compatibility with the existing ARM compiler from TI. And the other is compatibility with GCC ARM compilers.

The safety compiler qualification kit is of interest to those of you who build systems where functional safety is important. And code coverage is a feature of tiarmclang that helps you understand how well your code is being tested. So what's the background here?

Well, tiarmclang replaces armcl, where armcl is the existing proprietary compiler from TI. This replacement happens over time. We're not setting a date in stone and saying that past that date, there's no more armcl. That's the opposite of what we are doing. We're going to take care of our customers in this regard. We understand their situation.

And therefore, both compilers are presently supported. And that will continue to be the case for as long as necessary. All that being said, we do have some customers who've made the change. And they report to us that migrating from armcl to tiarmclang is a smooth and quick process. And I'll be talking about that in a bit more detail about 10 to 15 slides from now.

So moving on to compiler elements-- the parts of a compiler. So I start here. This is probably how most of you think about a compiler right now. And why shouldn't you? Why should you be an expert in these things?

You think about you've got your source code. You push it through either a GCC or the armcl compiler. Magic happens and then you get object code or an executable out on the other end. Well, let's expand on that just a little bit.

So now we've expanded it. And we're using the tiarmclang compiler. Again, your C and C++ source code is going into the compiler. But now let's talk about the first box, the clang front end.

I'm sure we've all had the fun experience of a diagnostic message like, line 173 missing semicolon. Well, that comes from the front end of the compiler. That's one of its jobs.
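As a small, hypothetical illustration of the kind of message the front end produces (this snippet is not from the slides), a missing semicolon like this one:

    int total = 0      /* semicolon missing here */
    int count = 5;

draws a clang-style diagnostic along the lines of:

    example.c:1:14: error: expected ';' after top level declarator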

The front end also lowers the representation of the program into an intermediate form that is operated on by the second box, the LLVM back end. The back end makes multiple passes over that intermediate representation. At the end, it produces assembly code.

That's processed by the third box, the LLVM ARM assembler. The assembler turns it into object code, which is an input to the linker. The additional inputs to the linker, as you see them over on the right, represent the runtime support libraries that come with the compiler. And then the linker combines those all together to form the final executable image.

Now, I'm sure you've noticed a color scheme here. What's going on? All right, so the pieces in blue come from something called LLVM/Clang, which raises the question, well, what's that? So here's some background on LLVM/Clang. LLVM is an open source project.

It provides building blocks for a compiler. It supports many programming languages, including C and C++. Clang is another open source project that's a sub-project of LLVM. And it's the C/C++ front end for the toolchain. Remember the front end? We just went over the part that gives you the fun diagnostics and all that.

Clang is the front end for the LLVM compilers. To learn more about both of these projects, please visit these links. Anything that's in red and underlined is a link that you can click when you download these slides later.

In common usage and throughout the rest of this presentation, the term Clang refers to both projects at once. So what are some of the benefits of using Clang? Well, it's got a large investment from industry and academia-- Apple, ARM, Google, Microsoft, and other similar companies have been using Clang for quite some time now.

It began at the University of Illinois. And it's gone on from there to lots of different universities. There are a large variety of features that can be adapted to TI devices. C++ 17-- this is how TI is picking up support for C++ 17. We're using Clang to get that done.

Code coverage is a feature we're going to cover later. This is a fast compiler. Low memory use means it doesn't use a lot of memory on your host machine when you're doing the build. Very good diagnostics, and on and on.

It's used in production ARM compilers from other vendors, including ARM Limited since 2014. GCC compatibility is a feature of Clang that's been there since the very beginning. And this makes it easy to compile open source software onto TI devices. These benefits only increase as Clang improves and grows.

And to that point, TI has made quite the investment to make it easy for us to incorporate the changes that the open source community makes in Clang into our development environment. And then when we release it to you, what you're getting in that release includes changes that were made by the open source community in Clang only a short time before. And this will continue to be the case from release to release.

So the elements in this diagram that are in red are from TI. So this includes the linker and a portion of the runtime support library. So this naturally raises the question, well, why did you do that?

I like to answer that question with examples. And here's one example I'd like you to consider. So that line of source code is creating a table and initializing it with some constants. And some questions I'd like you to think about a little bit are: where are the constants stored? And how are they copied into the global table?

Well, unfortunately, I don't have time to break down all the details of those questions. But I can tell you that it's implemented by a combination of the compiler, the linker, and the startup code in the runtime support libraries. What's interesting about the TI implementation is that it compresses the constants in memory, so they use up less space than they otherwise would. And then they're decompressed when they're copied into the location of the global table.
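To make that concrete, here is a minimal sketch of the kind of source line being discussed (the exact line from the slide isn't reproduced in this transcript):

    /* An initialized global table. The initial values have to be stored in
       flash and copied into the table before main() runs. In the TI
       implementation, the stored values are kept compressed in flash and
       decompressed by the startup code as they are copied out. */
    int global_table[8] = { 11, 22, 33, 44, 55, 66, 77, 88 };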

So this is the kind of thing that happens when you've got a development team that's been building compilers for embedded processing for more than 30 years. We come up with ideas like this. And we implement them and deploy them to our customers. And they benefit from it.

When we started up on tiarmclang, this is among the features that we just couldn't bring ourselves to leave behind. We really wanted to include this because we knew it was such a benefit to our customers. So we did all the work to figure out how to integrate all this together and make a product that is of such benefit to our customers, combining the benefits of Clang with our experience of over 30 years of doing embedded processing.

As the title of the slide implies, there are other examples I would love to go through. But in the interest of time, we need to move on. With any luck, we'll be able to come back to this topic and maybe get to those other examples.

So at this point, I'm hoping I've made it clear that from the look and feel of this compiler, from the viewpoint of the C and the C++ source code, it's a Clang compiler. But from the viewpoint of linking, it's just like the old TI toolchain. Moving on to flash memory savings-- I like to represent that with this table, which might be a little bit confusing at first. So let's walk through the first row of the table and give you an idea of what this represents.

So the first row says: when I compare tiarmclang with armcl--the proprietary compiler from TI--and I use both of those compilers to build a code base that goes by the name M0+ SDK, which is a code base built for the Cortex-M0+ CPU, the amount of flash memory required by tiarmclang is 10.9% smaller than the amount of flash memory required by armcl, and then so on for the rest of the table. You'll see that we're comparing tiarmclang against armcl and GCC on three different code bases and two different CPUs. And the numbers range from a 10.9% savings to a 0.7% savings.

In the next few slides, I'm going to give some more details on what each column stands for. So let's start with the first column, tiarmclang versus some other compiler. These are the versions of the compilers that were used to create this data.

They're in red, which means those are links. You can go download the compilers from those locations. Critical options are the options we used that had the most to do with reducing the amount of flash memory required. And the reason I wanted to show that is that we tried as best we could to be as fair as we could to each of the compilers.

We didn't somehow game the options so that we look good and our competitors look bad or something like that. We're trying to be as honest and as transparent about this as we can. So the code base--the second column. This is the collection of source code that was used for that particular measurement on that row.

The acronym SDK stands for Software Development Kit. One of them is an SDK for an M0+. The second one goes by the name coreSDK. That's not a standalone product that you can go to the website and download. It turns out there are about five variants of the SDK for SimpleLink. And each of those has this common core of examples.

And we extracted that. And it goes by the name coreSDK. And we use it for this measurement here. And then the last one, SDK Thread-- now, that is from a specific SimpleLink SDK.

The version's right there. It's from a specific platform within that SDK. And these are the Thread examples for that platform--those examples use OpenThread to do things on a Thread network.

The third column is the CPU that we're building the code base for. That's pretty straightforward. But this is a good time to show you which CPUs are supported by tiarmclang. This is the full list.

And then the last column--the savings. How much less flash memory is being used? OK, so each code base supplies a large number of examples. And these examples are complete programs. They typically demonstrate one feature from the code base.

Flash usage refers to how much flash memory is required to contain the code and constant data for the example. So this means we're ignoring things that live in RAM, such as the stack and the heap and other things. At the bottom of the slide are the size computation details. I'm not going to go through that right now. But I do want to include that on this slide for those of you who may be skeptical about the numbers.

I understand the skepticism. I don't blame you. That's why I want to be as transparent as I can about this and show you exactly where the number comes from.
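As a hedged illustration of what goes into that flash number and what doesn't (the names in this sketch are made up for the example):

    const short sine_table[4] = { 0, 7, 10, 7 }; /* constant data: counted, lives in flash */

    int sample_count;                            /* zero-initialized RAM: not counted */

    int next_sample(void)                        /* code: counted, lives in flash */
    {
        return sine_table[sample_count++ & 3];
    }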

And so at this point, I hope I've made the point that when you do a similar exercise yourself--you take your code, you build it with your current compiler, get a size number for how much flash is being used, and then you do the same thing with tiarmclang--you're going to see a reduction in the amount of flash memory required somewhere in the range of 1% to 10%. So compatibility--there are two areas of compatibility to be concerned about. One of them is compatibility with the current compiler from TI. It goes by the name armcl.

So one of the things you'd want to be concerned about is making some updates to your source code to make it tiarmclang friendly. It turns out tiarmclang doesn't support some of the pragmas and intrinsics that are supported by armcl. And what we recommend you do in these cases is change those to use something called ACLE, which is the ARM C Language Extensions, or the GCC extensions.

They do the same thing. They're supported by both armcl and tiarmclang. So what's nice about this is that you can work through this problem one piece at a time. You don't have to do the whole thing and then build one time and hope for the best.

Again, it's the total opposite of that. So to take an example, you could change all of your code section pragmas to use the GCC function attribute section instead. You could do that in all of your source files, then build it with armcl, test it the way you're testing it right now, and then move on to the next thing. Or maybe you want to work a file at a time--however you want to do it.
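For instance, a minimal sketch of that particular change, assuming a function you want placed in a named section (the function and section names here are hypothetical):

    /* Before: the armcl-specific pragma that places a function in a named section.

       #pragma CODE_SECTION(fast_copy, ".fastcode")
       void fast_copy(void);
    */

    /* After: the GCC function attribute, accepted by both armcl and tiarmclang. */
    __attribute__((section(".fastcode")))
    void fast_copy(void)
    {
        /* ... */
    }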

So that's how we recommend you go about the source code changes that are necessary. What about the compiler options? Those don't match either. So that's something to be concerned about.

If you build with CCS, you're in a really good place because CCS provides automated assistance. And it ends up being a pretty straightforward process to get from the armcl option set to the tiarmclang option set. Now, if you build with makefiles, you are going to have to update the makefiles.

But again, you're not being left alone. The link at the bottom of this slide takes you to the migration guide. And it goes through every armcl option in detail, telling you exactly what the story is for that option.

So you'll be able to work through this pretty quickly. It ends up being a pretty straightforward experience. Now, ti compatibility dot h is a header file that is supplied with tiarmclang. This is for quick experiments.

If you find yourself in a situation where you want to quickly build a few files or a small program or whatever and you don't want to take the time to go visit all your intrinsics and so on, you can include this one header file and build and start experimenting. That said, we do not recommend TI compatibility dot h as a long-term solution.

So another area of compatibility to be concerned with is GCC compatibility. The good news here is that Clang was designed to be compatible with GCC from the very beginning, which is quite the benefit for porting open source and general computing projects to TI devices. With regard to the command line interface, every time they introduced an option into Clang, they took a look at how GCC does it.

And they'll do it differently only if they have a really, really good reason. So for example, dash capital O means optimization with both compilers-- and so on and so forth for several of the other options. The ones that you typically see when you're building with these compilers, those are the ones that are most likely to be the same.

With regard to language extensions, most GNU language extensions are supported in tiarmclang. The assembly language syntax that is used is the one that is used by the GNU assembler--so the directives and that sort of thing. Those are all inherited from GNU.

And with regard to asm statements, tiarmclang supports the same asm statements that GCC ARM compilers support, as shown in the sketch after this paragraph. Now, with regard to GCC linker scripts, those do need to be converted to TI linker command files. However, there's an article that's already available. Click the link and you're there. And it'll walk you through that process.
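For example, a GCC-style extended asm statement like this hypothetical one builds the same way under tiarmclang as under the GCC ARM compilers:

    /* Add two integers with an explicit ARM ADD instruction, using GCC
       extended asm syntax: one output operand, two input operands. */
    static inline int add_asm(int a, int b)
    {
        int result;
        __asm__("add %0, %1, %2" : "=r"(result) : "r"(a), "r"(b));
        return result;
    }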

Moving on to the safety compiler qualification kit. So this is for those of you who are building systems where functional safety is an important consideration. This kit enables you to ensure that your use of tiarmclang complies with functional safety standards for industrial, automotive, and railway applications.

Among the advantages of using a kit like this is that you get the flexibility to use all of the tiarmclang options and features. There's nothing declared off limits here. You can use whatever you want.

It's free of charge. And it's available for download right now. You can go visit that link right now and get this qualification kit. There's also an article about it with more details.

My honest advice here is, if this is you, if you're concerned about this, just come download it. Scan through the user manual there. And start trying to do it for, I don't know, 15 to 30 minutes, something like that.

And you won't finish in that short amount of time. But you will get a really, really good idea of what's involved. And everybody who's done this-- or close to everybody who's done this-- comes away with a, oh, OK. This isn't so bad. This makes sense. So that's my honest advice here.

So code coverage is a feature of the tiarmclang compiler worth getting into just a little bit. There are about five steps involved in carrying out this process. And I unfortunately don't have time to walk you through all those steps. But what I am showing you is the final result.

You get a good look at what it looks like at the end. And that gives you a sense of what's going on here. So look at the screenshot on the upper left.

That's an HTML file that's the original source code with some extra information added and coloring and stuff. The first column, the line column, is the line number of the source code. The second one is the count-- how many times each of those lines ran.

And any zeros there, which show up in red, are a point of concern. This says you probably need to go change your testing so that code gets exercised. OK. And then if you look under line 10, you see that 'branch' appears a few times, along with true and false counts.

That's showing you, for each of the conditions in that statement, whether the condition was executed for the true case and the false case. It breaks it down both ways for you. And again, any zeros here are a point of concern and an indication that you probably need to go change your testing so that these branches are executed both ways.
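As a hedged, made-up example of the kind of code those counts describe:

    /* Line counts show how many times each line ran; the branch breakdown
       shows whether each condition was taken both ways. A zero on either
       the true or false side means the tests never drove it that way. */
    int clamp(int x, int lo, int hi)
    {
        if (x < lo)    /* needs one test with x < lo and one with x >= lo */
            return lo;
        if (x > hi)    /* needs one test with x > hi and one with x <= hi */
            return hi;
        return x;      /* a count of 0 here means no test hit the middle case */
    }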

Now, with regard to the screenshot on the upper right, it's the same information. But it's in a simple text file. And then the screenshot at the bottom is a coverage report. And this is something that's typically of interest to those who are building applications where functional safety is important.

And it shows you--for instance, function coverage is what percentage of my functions were run at least once. Line coverage is what percentage of my lines were run at least once. Region is a concept that I unfortunately don't have time to break down in a lot of detail. But speaking very briefly, it's a portion of the code for which there's one entrance and one exit.

And so what percentage of those were executed at least once? And then branch coverage is: what percentage of your branches were executed for both the true case and the false case? And the whole game here is to try to get these numbers as high as you can. And by doing that, you've improved how well you've tested your code.

So here's a slide with a few references. The first three have already appeared in the presentation. Where do you download the compiler? Where are the user's guide and the qualification kit? And then the last one is a link to e2e.ti.com.

This is where you go to post a question on any TI product, including tiarmclang. And it turns out compiler related questions are answered by yours truly-- me here. I'll answer those questions. So if you have any question about this presentation or any compiler question, please go to that link and post your question. And you're effectively talking directly with me.

So that completes the prepared material. And now it's time for questions and answers.

Very good. Thank you, George. Excellent presentation.

Some of you have already submitted questions. So we'll jump into those. If you would like to submit a question now, please type it into the Ask a Question box and hit the Send button.

Also, please take a moment to complete the feedback form that will appear on your screen at the end of the webinar. All right. George, could you please elaborate on why the linker is based on the TI toolchain instead of the LLVM toolchain?

So it just so happens that I have additional slides on this topic. Why did we use the TI linker and runtime support libraries? So here's another example.

So remember, the previous example talked about compressing the constants and the initialization of the global table. Here's another example. This one depends mostly on-- well, it depends on the linker and the runtime support libraries.

What the compiler does, as another way to reduce code size, is keep a kind of running track: every time you call printf, what were the format specifiers that were used--your %s, %d, and so forth? Which ones were used? And if you only use the really simple ones--%s, %d--and you don't use field widths and so on and so forth, then the compiler will choose to use TI's printf minimal, which is a variant of printf that has, well, minimal capabilities--the ability to print only strings and integers and fairly simple things.

And that implementation of printf, because it is less capable and that's all the capability that's needed by this particular instance of the program, takes up a lot less memory. And there are two other variants.

If you don't use floating point numbers, but you do use field widths and so on, you end up with that one--TI's printf nofloat. And then if you use some of the really fancy, everything-and-the-kitchen-sink features of printf, then you end up with the full version. To flesh this out a bit: in the coreSDK, one of the benchmarks saw a reduction of about 5.1% just from this one feature alone.
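A hedged sketch of what that looks like in practice (the function below is made up for illustration):

    #include <stdio.h>

    /* Only %s and %d appear, with no field widths or floats, so the linker
       can get away with pulling in the minimal printf variant. */
    void report(const char *name, int errors)
    {
        printf("%s: %d errors\n", name, errors);

        /* Something like printf("%8.3f\n", seconds); would instead pull in
           a fuller, larger variant of printf. */
    }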

So this is another example of something that people working on compilers for embedded processing for a really long time come up with--they come up with things like this and implement them. And to be able to integrate that into tiarmclang, we needed to use the TI linker and a portion of the TI runtime support library.

Very good. Another question is, doesn't code coverage depend upon unit testing?

Doesn't code coverage depend upon unit testing? I mean, the point of code coverage is to show you how well your tests are actually exercising all the little pieces of your code--so that there isn't some section of your code that never gets executed and that you therefore haven't proven works correctly.

Unit testing--well, what code coverage could tell you here is that you're missing some unit test. Unit testing is testing some particular part--one part or a sub-part of a feature of your system. And so if there's some part of your system you're not testing at all, code coverage would expose that to you.

Another question is, does the TI linker include link time optimization like the LLVM toolchain does?

Tiarmclang-- the current release of it does not support link time optimization. But that's a work in progress. And we expect to have that in release very soon.

OK, fair enough. I think you may have addressed this. But some people may need to hear it.

How long will TI support the armcl compiler?

For as long as the customers need us to. We're going to listen carefully to our customers on this one, through our field network as well as direct contact. And you can post on the forum about it. You can say, I need to use armcl up to this certain date. Is that OK? We'll answer that question.

So, I mean, we're very sensitive to the situation our customers are in. If you're in the middle of a project that's nearing completion and it uses armcl, you're in good hands. You're going to be fine.

The answer basically is that you're fully engaged with the customers. And you'll support what they need. Probably time for one last question. What are some good reasons to use code coverage?

So I do have a slide on that.

It's about-- one of the principal reasons to get into code coverage is to try and expose latent bugs. It's all about trying to find those portions of your code that you thought you were exercising but it actually turns out you're not. Or you're exercising them so seldom that you're worried that it's not working in every circumstance. So that's the big reason you want to get into code coverage.

OK, well, I think we're just about out of time for today. For questions we didn't have time to get to, we'll respond to those via email. And that concludes today's presentation. On behalf of Electronic Design, I'd like to thank Texas Instruments for sponsoring today's event and, of course, all of you for joining. Have a great rest of the day.

Thank you.