What is the difference between ARM and Thumb mode?

The ARM instruction set is a set of 32-bit instructions providing a comprehensive range of operations. ARMv4T and later define a 16-bit instruction set called Thumb. Most of the functionality of the 32-bit ARM instruction set is available, but some operations require more instructions. The Thumb instruction set provides better code density, at the expense of performance. ARMv6T2 introduces Thumb-2 technology.
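
As a rough illustration (a minimal GNU-assembler sketch, not taken from the discussion itself), the same addition can be written for either instruction set; the ARM encoding always occupies 32 bits, while the classic Thumb encoding fits in 16 bits:

    .syntax unified          @ UAL: one mnemonic set for both states

    .arm                     @ assemble what follows as 32-bit ARM
arm_add:
    add     r0, r0, r1       @ one 32-bit ARM instruction
    bx      lr

    .thumb                   @ assemble what follows as Thumb
    .thumb_func
thumb_add:
    adds    r0, r0, r1       @ one 16-bit Thumb instruction (flag-setting form)
    bx      lr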

This is a major enhancement to the Thumb instruction set, adding 32-bit Thumb instructions. The 16-bit and 32-bit Thumb instructions together provide almost exactly the same functionality as the ARM instruction set. If you choose to use all of those features, then you limit your application to CPUs of the same or higher architecture that have the same set of optional features.
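
For example (a sketch assuming GNU-assembler unified syntax and a Thumb-2 capable target such as ARMv7), 16-bit and 32-bit encodings can be freely mixed in the same Thumb code stream:

    .syntax unified
    .thumb                    @ Thumb state; Thumb-2 allows mixed widths
    .thumb_func
mixed_widths:
    adds    r0, r0, #1        @ 16-bit encoding (narrow, flag-setting)
    add.w   r0, r0, #1        @ 32-bit Thumb-2 encoding of the same operation
    movw    r1, #0x1234       @ 32-bit Thumb-2 instruction with no 16-bit form
    bx      lr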

So Cortex is not different apart from the name. If, however, you want to run an application on a wide range of architectures, then you'll have to compile for a subset of the CPU features. It's your decision: if you use feature X in your application, then yes, you can't run it on a CPU that doesn't have feature X. The trouble is in your understanding of how architectures and binary compatibility work.
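
One way to stay inside such a subset (a sketch assuming the GNU assembler's .arch directive; the exact architecture names can differ between toolchains) is to tell the assembler the oldest architecture you intend to support, so that newer opcodes are rejected at build time:

    .syntax unified
    .arch   armv6t2           @ restrict accepted instructions to ARMv6T2
    .thumb
portable_routine:
    movw    r0, #100          @ accepted: MOVW exists from ARMv6T2 onwards
    @ sdiv  r0, r0, r1        @ would be rejected: hardware divide is a later,
                              @ optional feature on A-profile cores
    bx      lr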

If you can't understand the above then you shouldn't be making wild claims about compatibility, and should leave it to the experts! You miss the point. It is NOT about a common subset, but about what opcodes MIGHT reside in a library, or hidden in a working system, or in code that customers load to run on that system. Perhaps users can say which kind of "binary compatible" matters most to them?

Binary compatible at the application level is one thing, but binary compatible at the OS level is another. As such they are completely different. Now it may be possible to build code that works with either by having separate low-level exception handlers, but that's a bit like saying fat binaries are binary compatible because they have code that runs on one ISA and code that runs on another.

What people need is confidence that code developed for one system will work on another, which means minimizing the distinct code paths. The solution for this problem is to mark all objects and images with the architecture version so that incompatibilities can be diagnosed.
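
That marking already exists in the ARM EABI as build attributes: each object records the architecture and instruction-set features it relies on, and the linker can diagnose incompatible mixes. A minimal sketch, assuming the GNU assembler accepts the symbolic tag names (numeric tags also work); the values shown are my reading of the EABI attribute tables:

    @ Recorded in the object's .ARM.attributes section.
    .eabi_attribute Tag_CPU_arch, 8       @ 8 = ARMv6T2 (assumed encoding)
    .eabi_attribute Tag_THUMB_ISA_use, 2  @ 2 = Thumb-2 instructions permitted
    .syntax unified
    .thumb

Running readelf -A on the resulting object prints the recorded attributes, which is how a mismatch can be diagnosed after the fact.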

It'll take the undefined instruction trap as it can't execute ARM instructions. At that point it's up to the OS to decide what to do: it can report an error, kill the current process, or emulate the instruction. That is not rigorous but rubbish. For any two cores that are not identical you can find a code sequence that runs correctly on one but fails on the other. The architecture defines such sequences to be illegal. However, the existence proof of illegal instructions has no effect on binary compatibility, precisely because the architecture defines them to be illegal.
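
As a concrete illustration (a sketch assuming ARMv7-M behaviour, i.e. a Thumb-only core), even the attempt to enter ARM state faults, and it is the fault handler, typically part of the OS, that decides what happens next:

    .syntax unified
    .thumb                    @ M-profile cores execute Thumb only
    .thumb_func
try_arm_state:
    ldr     r0, =target
    bic     r0, r0, #1        @ clear bit 0: request ARM state at the branch
    blx     r0                @ on ARMv7-M this raises an INVSTATE UsageFault
    bx      lr

    .thumb_func
target:
    bx      lr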

The feature set needed by objects X and Y is the union of the features of X and Y (for example, if X needs Thumb-2 and Y needs hardware divide, the combination needs both). Apply these rules recursively and you can calculate the features needed by any piece of code, the features supplied by a set of CPUs, and therefore whether an application is compatible with a set of CPUs.

How many people would agree with that statement, do you think? Yes, all existing Thumb-1 code will run fine. Remember that Thumb-1 is a subset of Thumb-2. What exactly is unclear in "Thumb-only"? First silicon is expected soon. No, most of the OS code is still the same; the only differences are at the lowest level. This is nothing new and is true for all ARM cores. However, this fact has nothing to do with binary compatibility.

Linux manages to run the same binaries on many different CPUs even though they need different startup code. So the suggestion that CPUs which need different startup code are not binary compatible is obviously wrong. The code at the lowest level always needs modifications for each CPU, but the amount of such code is typically small.

One of the goals of the new exception model is to reduce the amount of such code further, as well as the amount of assembler. I think I'll rest my case here. Users can decide for themselves if that presents no problem, or is something that needs to be watched in their systems. In exactly the same way as Thumb does. Code is just numbers: the ARM instruction set is 32-bit numbers and it covers all of that space, with some undefined or unpredictable instructions; the Thumb instruction set is 16-bit numbers and likewise covers all of that space, with some undefined or unpredictable instructions.
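
To make that concrete (a sketch with the encodings written out by hand, so treat the exact values as assumptions), you can emit the raw numbers directly and the core interprets them according to its current state; some of the numbers are deliberately reserved as permanently undefined:

    .syntax unified

    .arm
arm_numbers:
    .word   0xE0800001        @ the 32-bit number encoding "add r0, r0, r1"
    .word   0xE7F000F0        @ a permanently UNDEFINED ARM encoding (udf #0)

    .thumb
thumb_numbers:
    .short  0x1840            @ the 16-bit number encoding "adds r0, r0, r1"
    .short  0xDE00            @ a permanently UNDEFINED Thumb encoding (udf #0)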

Thumb-2-only cores will fault state change instructions instead of switching to ARM state. And the confusion might actually be good, as Arm (the company) wanted them to be similar: a lot of effort went into the Unified Assembly Language (UAL) so that assembly files written for ARM could be assembled for Thumb-2 without change.

The situation should be much better compared to Thumb-1, where ARM assembly more likely has to be rewritten to target Thumb-1, while an ARM to Thumb-2 rewrite is less likely to be needed. Another difference is the data-processing instructions.
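
A small illustration of the UAL point (a sketch assuming the GNU assembler): the same unified-syntax lines assemble either as ARM or as 32-bit Thumb-2 encodings, depending only on the .arm/.thumb directive in force, whereas classic Thumb-1 has no non-flag-setting three-operand ADD and no ORR-with-immediate, so the same lines would have had to be rewritten for a Thumb-1 target:

    .syntax unified
    .arch   armv7-a           @ a Thumb-2 capable architecture (assumed)

    .thumb                    @ change this one directive to ".arm" and the
                              @ same three lines assemble as ARM instructions
ual_example:
    add     r0, r1, r2        @ Thumb-2: 32-bit ADD.W / ARM: 32-bit ADD
    orr     r0, r0, #0xFF     @ ORR with immediate: no Thumb-1 encoding exists
    bx      lr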

The answer to that is no. Oh, ARM and their silly naming... As well, within Thumb-2 there are opcodes that have been added over time, so not all Thumb-2 is the same. From the main CPU perspective, there is no mode known as Thumb-2; I think that is what you mean by 'official'?

Thank you! This sorts things out for me. I got so much info on ARM. Currently I am assembling using the -mthumb parameter and in the source file I am using. But could I remove. The documentation is not perfectly clear about that. Cortex-M4 is armv7e-m. — ConsistentProgrammer
