Compilation targets

Since the inception of computers there has been an endless push to use technology for more things, in more areas, in ways that previously only science fiction writers had imagined. Improvements in hardware, and especially in software, have laid the groundwork for creating things at very high levels of abstraction. With today’s tools, knowledgeable individuals can create working, useful software and hardware in minimal time, with little source code, and sometimes even with no code at all. In many cases that comes with a price: the code does not run at optimal speed. Time-saving high-level tools carry computing overhead; they trade speed of execution for speed of development, because making the code execute optimally costs too much compared to shipping it quickly. Money comes when the work is done, and when the work is finished fast, the money comes sooner.

Here are some compilation targets:

Hardware specific platforms and instructions – When you want to get the maximum out of the hardware, you use the software instructions provided by the manufacturer, so that every single operation counts and executes useful business logic, not a virtual machine inside a virtual machine inside a virtual machine. Probably the most highly optimized platforms are crypto-mining hardware and software: the mining equipment is hardware optimized for the mathematical operations used by the cryptographic algorithms, and the software passes the matrices of bytes to the “iron” and receives the desired output. When the hardware is not optimized, these calculations are done with common, general-purpose instructions, which in most cases are slower.
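
To make “hardware specific instructions” concrete, here is a minimal sketch (my own illustration, not taken from any mining codebase) using x86 AVX2 intrinsics, where a single instruction adds eight integers at once. It assumes a CPU and compiler with AVX2 support.

```c
/* Minimal sketch: vector addition with x86 AVX2 intrinsics.
 * Assumes AVX2 hardware; compile with: gcc -mavx2 simd.c -o simd */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
    int out[8];

    __m256i va   = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb   = _mm256_loadu_si256((const __m256i *)b);
    __m256i vsum = _mm256_add_epi32(va, vb);  /* 8 additions, 1 instruction */
    _mm256_storeu_si256((__m256i *)out, vsum);

    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);
    printf("\n");
    return 0;
}
```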

LLVM – LLVM is an abstraction over the hardware that exposes more useful concepts to programming language and compiler developers. Code that goes through LLVM is still close to optimal; some hardware instructions may not come out in their fastest form, but in general the code runs fast. One reason is that LLVM is a compiler infrastructure: the code is transformed into low-level machine code after it is written, as input to the compiler, not interpreted at run time. The end result is binary, or very close to binary, so at run time there is little to no interpretation overhead and the true business logic can run.
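
As a small illustration of that pipeline, the C function below can be run through Clang, an LLVM-based compiler; the flags in the comment are the standard way to dump the intermediate representation before it becomes a native binary.

```c
/* Sketch of the LLVM pipeline from the outside:
 *   clang -S -emit-llvm square.c -o square.ll   # source -> readable LLVM IR
 *   clang -O2 square.c -o square                # IR -> optimized native binary
 * All transformation happens before run time; the binary contains
 * no interpreter, only machine code for the logic below. */
int square(int x) {
    return x * x;
}

int main(void) {
    return square(6) == 36 ? 0 : 1;  /* exit 0 when the result is correct */
}
```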

OS specific code and libraries – After abstracting over the hardware, every operating system adds upper layers of APIs and guidelines that app developers must follow. These can include file-system permissions, limits on the resources an app may consume, and restrictions on hardware features and software layers, so that an app cannot break and trash the OS itself. They also include process isolation, so that an app can only access the memory and data the OS has given it, not the data of another process. And because there are several different OSes, the APIs, the binaries, and the recommended best practices all differ, which leaves developers rewriting essentially the same logic, just linking against the different OS specific code.
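
Here is a minimal sketch of what “rewriting the same logic, just linking the different OS specific code” looks like in practice. The function and file names are invented for illustration; the APIs themselves (Win32 CreateFileA, POSIX open) are real.

```c
/* The same business logic (open a file, check it is readable)
 * written against two different OS layers, selected at compile time. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static int can_read(const char *path) {
    /* Win32: handle-based API, sharing and attribute flags in the call */
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 0;
    CloseHandle(h);
    return 1;
}
#else
#include <fcntl.h>
#include <unistd.h>
static int can_read(const char *path) {
    /* POSIX: file-descriptor based API */
    int fd = open(path, O_RDONLY);
    if (fd < 0) return 0;
    close(fd);
    return 1;
}
#endif

int main(void) {
    /* "app.conf" is a placeholder file name */
    printf("readable: %d\n", can_read("app.conf"));
    return 0;
}
```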

JVM, .NET Runtime, and the like – To abstract over Windows Forms, Cocoa, GTK, Qt, and other UI toolkits, app developers can write Java or, more recently, .NET Core user interfaces; when a virtual machine exists for each OS, the app can in theory run on all of them without code changes and without recompilation. This is compiling for a higher-level virtual machine. The ability to abstract over memory, operating system specifics, platform specifics, and more has given us app developers the basis to think, to abstract, to conceptualize in object-oriented terms at a very high level, closer to our lives and to the things normal users see than to the byte order in memory. This comes with the overhead of executing code that executes our code, but the speed of development wins over the speed of execution.
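
To show where that overhead comes from, here is a toy dispatch loop that interprets an invented bytecode. It is only a sketch, but it has the same basic shape the JVM or the .NET runtime use when they interpret their own bytecode instead of (or before) JIT-compiling it: every user-level operation costs at least one extra dispatch.

```c
/* Toy stack-based virtual machine: computes and prints (2 + 3) * 4.
 * The opcodes are invented for illustration. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

int main(void) {
    int code[] = {OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                  OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT};
    int stack[16], sp = 0, pc = 0;

    for (;;) {
        switch (code[pc++]) {  /* the dispatch itself is the overhead */
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return 0;
        }
    }
}
```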

HTML/CSS/JavaScript/WebAssembly – The Web has evolved into a compilation target. Several languages compile to JavaScript: Java (via GWT), Dart, TypeScript. There is ongoing work to compile C code to WebAssembly. There are frameworks that try to hide the HTML and CSS and abstract over components: Flutter, Angular, Blazor, Vaadin. All of this is done with transcompilers. And because JavaScript itself is interpreted in the end, all the framework-generated code, which already comes with built-in overhead, is executed with additional overhead. WebAssembly could save the day, but until then it seems that wasted computing power does not matter.
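
As a sketch of the C-to-WebAssembly path, assuming the Emscripten toolchain is installed, a plain C function can be compiled for the browser like this:

```c
/* Compile with: emcc add.c -o add.html
 * Emscripten produces add.wasm plus the JavaScript glue; in the
 * browser console the function is then callable as Module._add(2, 3). */
#include <emscripten/emscripten.h>

EMSCRIPTEN_KEEPALIVE   /* keep the symbol exported to JavaScript */
int add(int a, int b) {
    return a + b;      /* runs as WebAssembly, near native speed */
}

int main(void) {
    return 0;          /* nothing to do at page load */
}
```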

Full-Stack Web Abstractions – Java’s Vaadin, GWT, .NET Blazor. These are frameworks that try to abstract over the HTTP protocol and hide from the application developer what is client and what is server. Because user interface components are just object-oriented APIs and concepts, their real implementation may be anything: the same “Window”, “Form”, “Button”, “Layout”, or “TextBox” may actually be HTML/CSS/JS, but it may also be a Flash equivalent, a native Android component, or a desktop UI component. Blazor uses Xamarin for this binding, and I have seen GWT use Cordova to wrap the web code into apps. Of course, all the neat features come with overhead, both at compile time and at run time.
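
The component trick behind these frameworks can be sketched in a few lines of C: the application talks to an abstract “Button”, and the framework plugs in whichever renderer fits the target. All names below are invented for illustration.

```c
/* One abstract component, two interchangeable implementations. */
#include <stdio.h>

typedef struct {
    const char *label;
    void (*render)(const char *label);  /* swapped per platform */
} Button;

static void render_html(const char *label)   { printf("<button>%s</button>\n", label); }
static void render_native(const char *label) { printf("[native widget: %s]\n", label); }

int main(void) {
    Button web     = {"Save", render_html};    /* web target */
    Button desktop = {"Save", render_native};  /* desktop target */
    web.render(web.label);
    desktop.render(desktop.label);
    return 0;
}
```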

Database/Machine Learning/Language Processing Platforms – The big internet platforms have succeeded in collecting very large sets of data. This allows them to create software that teaches itself from the data. The analyses and transformations of that information that could make software “smarter” are endless. Artificial intelligence and machine learning are endless fields of how to extract a lesson from all the information, so that when something new arrives, the software “knows” the new input within milliseconds, instead of the hours, days, or months it would take to reprocess an image, natural language, or any other kind of data set.

Drag and Drop/Workflow designing platforms – Over the years I have seen several such platforms, and I have even tried to create some of my own: software packages that try to abstract over programming language specifics and offer a user interface where you can wire up the business logic visually. There is no escaping getting the business logic into the computer somehow, either by writing endless rows of code or by dragging and dropping the inputs, the operations, and the output location. Software currently cannot write itself entirely. With time, technical people may (or may not) come to understand that the technical abstractions that help them make good software do not matter to the end user and do not always equal more money. Technical optimum is important when things scale. Until then, solving the end user’s problems fast is what is important.
