fix conflicts

Author: Nguyen Anh Quynh
Date: 2022-01-28 10:30:51 +08:00
992 changed files with 411332 additions and 181405 deletions

qemu/.editorconfig Normal file (43 lines)

@@ -0,0 +1,43 @@
# EditorConfig is a file format and collection of text editor plugins
# for maintaining consistent coding styles between different editors
# and IDEs. Most popular editors support this either natively or via
# plugin.
#
# Check https://editorconfig.org for details.
root = true
[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
[*.mak]
indent_style = tab
indent_size = 8
file_type_emacs = makefile
[Makefile*]
indent_style = tab
indent_size = 8
file_type_emacs = makefile
[*.{c,h}]
indent_style = space
indent_size = 4
[*.sh]
indent_style = space
indent_size = 4
[*.{s,S}]
indent_style = tab
indent_size = 8
file_type_emacs = asm
[*.{vert,frag}]
file_type_emacs = glsl
[*.json]
indent_style = space
file_type_emacs = python


@@ -1,107 +0,0 @@
QEMU Coding Style
=================
Please use the script checkpatch.pl in the scripts directory to check
patches before submitting.
1. Whitespace
Of course, the most important aspect in any coding style is whitespace.
Crusty old coders who have trouble spotting the glasses on their noses
can tell the difference between a tab and eight spaces from a distance
of approximately fifteen parsecs. Many a flamewar have been fought and
lost on this issue.
QEMU indents are four spaces. Tabs are never used, except in Makefiles
where they have been irreversibly coded into the syntax.
Spaces of course are superior to tabs because:
- You have just one way to specify whitespace, not two. Ambiguity breeds
mistakes.
- The confusion surrounding 'use tabs to indent, spaces to justify' is gone.
- Tab indents push your code to the right, making your screen seriously
unbalanced.
- Tabs will be rendered incorrectly on editors who are misconfigured not
to use tab stops of eight positions.
- Tabs are rendered badly in patches, causing off-by-one errors in almost
every line.
- It is the QEMU coding style.
Do not leave whitespace dangling off the ends of lines.
2. Line width
Lines are 80 characters; not longer.
Rationale:
- Some people like to tile their 24" screens with a 6x4 matrix of 80x24
xterms and use vi in all of them. The best way to punish them is to
let them keep doing it.
- Code and especially patches is much more readable if limited to a sane
line length. Eighty is traditional.
- It is the QEMU coding style.
3. Naming
Variables are lower_case_with_underscores; easy to type and read. Structured
type names are in CamelCase; harder to type but standing out. Enum type
names and function type names should also be in CamelCase. Scalar type
names are lower_case_with_underscores_ending_with_a_t, like the POSIX
uint64_t and family. Note that this last convention contradicts POSIX
and is therefore likely to be changed.
When wrapping standard library functions, use the prefix qemu_ to alert
readers that they are seeing a wrapped version; otherwise avoid this prefix.
4. Block structure
Every indented statement is braced; even if the block contains just one
statement. The opening brace is on the line that contains the control
flow statement that introduces the new block; the closing brace is on the
same line as the else keyword, or on a line by itself if there is no else
keyword. Example:
if (a == 5) {
    printf("a was 5.\n");
} else if (a == 6) {
    printf("a was 6.\n");
} else {
    printf("a was something else entirely.\n");
}
Note that 'else if' is considered a single statement; otherwise a long if/
else if/else if/.../else sequence would need an indent for every else
statement.
An exception is the opening brace for a function; for reasons of tradition
and clarity it comes on a line by itself:
void a_function(void)
{
    do_something();
}
Rationale: a consistent (except for functions...) bracing style reduces
ambiguity and avoids needless churn when lines are added or removed.
Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.
6. Conditional statements
When comparing a variable for (in)equality with a constant, list the
constant on the right, as in:
if (a == 1) {
    /* Reads like: "If a equals 1" */
    do_something();
}
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.

qemu/CODING_STYLE.rst Normal file (641 lines)

@@ -0,0 +1,641 @@
=================
QEMU Coding Style
=================
.. contents:: Table of Contents
Please use the script checkpatch.pl in the scripts directory to check
patches before submitting.
Formatting and style
********************
Whitespace
==========
Of course, the most important aspect in any coding style is whitespace.
Crusty old coders who have trouble spotting the glasses on their noses
can tell the difference between a tab and eight spaces from a distance
of approximately fifteen parsecs. Many a flamewar has been fought and
lost on this issue.
QEMU indents are four spaces. Tabs are never used, except in Makefiles
where they have been irreversibly coded into the syntax.
Spaces of course are superior to tabs because:
* You have just one way to specify whitespace, not two. Ambiguity breeds
mistakes.
* The confusion surrounding 'use tabs to indent, spaces to justify' is gone.
* Tab indents push your code to the right, making your screen seriously
unbalanced.
* Tabs will be rendered incorrectly on editors who are misconfigured not
to use tab stops of eight positions.
* Tabs are rendered badly in patches, causing off-by-one errors in almost
every line.
* It is the QEMU coding style.
Do not leave whitespace dangling off the ends of lines.
Multiline Indent
----------------
There are several places where indent is necessary:
* if/else
* while/for
* function definition & call
When breaking up a long line to fit within line width, we need a proper indent
for the following lines.
In case of if/else, while/for, align the secondary lines just after the
opening parenthesis of the first.
For example:
.. code-block:: c
if (a == 1 &&
    b == 2) {

while (a == 1 &&
       b == 2) {
In case of function, there are several variants:
* 4 spaces indent from the beginning
* align the secondary lines just after the opening parenthesis of the first
For example:
.. code-block:: c
do_something(x, y,
             z);

do_something(x, y,
    z);

do_something(x, do_another(y,
                           z));
Line width
==========
Lines should be 80 characters; try not to make them longer.
Sometimes it is hard to do, especially when dealing with QEMU subsystems
that use long function or symbol names. Even in that case, do not make
lines much longer than 80 characters.
Rationale:
* Some people like to tile their 24" screens with a 6x4 matrix of 80x24
xterms and use vi in all of them. The best way to punish them is to
let them keep doing it.
* Code and especially patches is much more readable if limited to a sane
line length. Eighty is traditional.
* The four-space indentation makes the most common excuse ("But look
at all that white space on the left!") moot.
* It is the QEMU coding style.
Naming
======
Variables are lower_case_with_underscores; easy to type and read. Structured
type names are in CamelCase; harder to type but standing out. Enum type
names and function type names should also be in CamelCase. Scalar type
names are lower_case_with_underscores_ending_with_a_t, like the POSIX
uint64_t and family. Note that this last convention contradicts POSIX
and is therefore likely to be changed.
When wrapping standard library functions, use the prefix ``qemu_`` to alert
readers that they are seeing a wrapped version; otherwise avoid this prefix.
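As an illustrative sketch (the function below is hypothetical, not part of the
QEMU tree), a wrapper following this convention could be declared as:

.. code-block:: c

/* Hypothetical qemu_-prefixed wrapper around the standard write(2). */
ssize_t qemu_write_retry(int fd, const void *buf, size_t count);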
Block structure
===============
Every indented statement is braced; even if the block contains just one
statement. The opening brace is on the line that contains the control
flow statement that introduces the new block; the closing brace is on the
same line as the else keyword, or on a line by itself if there is no else
keyword. Example:
.. code-block:: c
if (a == 5) {
    printf("a was 5.\n");
} else if (a == 6) {
    printf("a was 6.\n");
} else {
    printf("a was something else entirely.\n");
}
Note that 'else if' is considered a single statement; otherwise a long if/
else if/else if/.../else sequence would need an indent for every else
statement.
An exception is the opening brace for a function; for reasons of tradition
and clarity it comes on a line by itself:
.. code-block:: c
void a_function(void)
{
    do_something();
}
Rationale: a consistent (except for functions...) bracing style reduces
ambiguity and avoids needless churn when lines are added or removed.
Furthermore, it is the QEMU coding style.
Declarations
============
Mixed declarations (interleaving statements and declarations within
blocks) are generally not allowed; declarations should be at the beginning
of blocks.
Every now and then, an exception is made for declarations inside a
#ifdef or #ifndef block: if the code looks nicer, such declarations can
be placed at the top of the block even if there are statements above.
On the other hand, however, it's often best to move that #ifdef/#ifndef
block to a separate function altogether.
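As a sketch of that exception (all names here are made up for illustration):

.. code-block:: c

void frobnicate(Device *dev)
{
    do_generic_setup(dev);
#ifdef CONFIG_EXTRA_CHECKS
    /* Declaration at the top of the #ifdef block, after a statement above. */
    int leftover = count_leftover_requests(dev);
    assert(leftover == 0);
#endif
}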
Conditional statements
======================
When comparing a variable for (in)equality with a constant, list the
constant on the right, as in:
.. code-block:: c
if (a == 1) {
    /* Reads like: "If a equals 1" */
    do_something();
}
Rationale: Yoda conditions (as in 'if (1 == a)') are awkward to read.
Besides, good compilers already warn users when '==' is mis-typed as '=',
even when the constant is on the right.
Comment style
=============
We use traditional C-style /``*`` ``*``/ comments and avoid // comments.
Rationale: The // form is valid in C99, so this is purely a matter of
consistency of style. The checkpatch script will warn you about this.
Multiline comment blocks should have a row of stars on the left,
and the initial /``*`` and terminating ``*``/ both on their own lines:
.. code-block:: c
/*
 * like
 * this
 */
This is the same format required by the Linux kernel coding style.
(Some of the existing comments in the codebase use the GNU Coding
Standards form which does not have stars on the left, or other
variations; avoid these when writing new comments, but don't worry
about converting to the preferred form unless you're editing that
comment anyway.)
Rationale: Consistency, and ease of visually picking out a multiline
comment from the surrounding code.
Language usage
**************
Preprocessor
============
Variadic macros
---------------
For variadic macros, stick with this C99-like syntax:
.. code-block:: c
#define DPRINTF(fmt, ...) \
    do { printf("IRQ: " fmt, ## __VA_ARGS__); } while (0)
Include directives
------------------
Order include directives as follows:
.. code-block:: c
#include "qemu/osdep.h" /* Always first... */
#include <...> /* then system headers... */
#include "..." /* and finally QEMU headers. */
The "qemu/osdep.h" header contains preprocessor macros that affect the behavior
of core system headers like <stdint.h>. It must be the first include so that
core system headers included by external libraries get the preprocessor macros
that QEMU depends on.
Do not include "qemu/osdep.h" from header files since the .c file will have
already included it.
C types
=======
It should be common sense to use the right type, but we have collected
a few useful guidelines here.
Scalars
-------
If you're using "int" or "long", odds are good that there's a better type.
If a variable is counting something, it should be declared with an
unsigned type.
If it's host memory-size related, size_t should be a good choice (use
ssize_t only if required). Guest RAM memory offsets must use ram_addr_t,
but only for RAM, it may not cover whole guest address space.
If it's file-size related, use off_t.
If it's file-offset related (i.e., signed), use off_t.
If it's just counting small numbers use "unsigned int";
(on all but oddball embedded systems, you can assume that that
type is at least four bytes wide).
In the event that you require a specific width, use a standard type
like int32_t, uint32_t, uint64_t, etc. The specific types are
mandatory for VMState fields.
Don't use Linux kernel internal types like u32, __u32 or __le32.
Use hwaddr for guest physical addresses except pcibus_t
for PCI addresses. In addition, ram_addr_t is a QEMU internal address
space that maps guest RAM physical addresses into an intermediate
address space that can map to host virtual address spaces. Generally
speaking, the size of guest memory can always fit into ram_addr_t but
it would not be correct to store an actual guest physical address in a
ram_addr_t.
For CPU virtual addresses there are several possible types.
vaddr is the best type to use to hold a CPU virtual address in
target-independent code. It is guaranteed to be large enough to hold a
virtual address for any target, and it does not change size from target
to target. It is always unsigned.
target_ulong is a type the size of a virtual address on the CPU; this means
it may be 32 or 64 bits depending on which target is being built. It should
therefore be used only in target-specific code, and in some
performance-critical built-per-target core code such as the TLB code.
There is also a signed version, target_long.
abi_ulong is for the ``*``-user targets, and represents a type the size of
'void ``*``' in that target's ABI. (This may not be the same as the size of a
full CPU virtual address in the case of target ABIs which use 32 bit pointers
on 64 bit CPUs, like sparc32plus.) Definitions of structures that must match
the target's ABI must use this type for anything that on the target is defined
to be an 'unsigned long' or a pointer type.
There is also a signed version, abi_long.
Of course, take all of the above with a grain of salt. If you're about
to use some system interface that requires a type like size_t, pid_t or
off_t, use matching types for any corresponding variables.
Also, if you try to use e.g., "unsigned int" as a type, and that
conflicts with the signedness of a related variable, sometimes
it's best just to use the *wrong* type, if "pulling the thread"
and fixing all related variables would be too invasive.
Finally, while using descriptive types is important, be careful not to
go overboard. If whatever you're doing causes warnings, or requires
casts, then reconsider or ask for help.
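A few illustrative declarations following the guidance above:

.. code-block:: c

size_t host_buf_len;   /* host memory size */
off_t file_offset;     /* file size/offset: signed off_t */
unsigned int n_items;  /* counting small numbers */
uint32_t checksum;     /* exact width required */
hwaddr gpa;            /* guest physical address */
vaddr va;              /* CPU virtual address in target-independent code */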
Pointers
--------
Ensure that all of your pointers are "const-correct".
Unless a pointer is used to modify the pointed-to storage,
give it the "const" attribute. That way, the reader knows
up-front that this is a read-only pointer. Perhaps more
importantly, if we're diligent about this, when you see a non-const
pointer, you're guaranteed that it is used to modify the storage
it points to, or it is aliased to another pointer that is.
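A minimal sketch of the convention (both functions are hypothetical):

.. code-block:: c

/* const pointer: the string is only read. */
size_t count_spaces(const char *s);

/* non-const pointer: the caller can expect *s to be modified. */
void upcase_in_place(char *s);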
Typedefs
--------
Typedefs are used to eliminate the redundant 'struct' keyword, since type
names have a different style than other identifiers ("CamelCase" versus
"snake_case"). Each named struct type should have a CamelCase name and a
corresponding typedef.
Since certain C compilers choke on duplicated typedefs, you should avoid
them and declare a typedef only in one header file. For common types,
you can use "include/qemu/typedefs.h" for example. However, as a matter
of convenience it is also perfectly fine to use forward struct
definitions instead of typedefs in headers and function prototypes; this
avoids problems with duplicated typedefs and reduces the need to include
headers from other headers.
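Sketch of the pattern, using a made-up MyDeviceState type:

.. code-block:: c

/* In exactly one header: */
typedef struct MyDeviceState MyDeviceState;

struct MyDeviceState {
    int irq_level;
};

/* In other headers, a forward struct declaration avoids a duplicate typedef: */
struct MyDeviceState;
void my_device_reset(struct MyDeviceState *s);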
Reserved namespaces in C and POSIX
----------------------------------
Underscore capital, double underscore, and underscore 't' suffixes should be
avoided.
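For example, identifiers such as these fall into reserved namespaces and
should not be introduced:

.. code-block:: c

#define _MY_GUARD_H      /* leading underscore + capital: reserved */
#define __my_helper(x)   /* double underscore: reserved */
typedef int my_count_t;  /* ..._t suffix: reserved by POSIX */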
Low level memory management
===========================
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_memalign/qemu_blockalign/qemu_vfree
APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Prefer g_new(T, n) instead of g_malloc(sizeof(T) ``*`` n) for the following
reasons:
* It catches multiplication overflowing size_t;
* It returns T ``*`` instead of void ``*``, letting compiler catch more type errors.
Declarations like
.. code-block:: c
T *v = g_malloc(sizeof(*v))
are acceptable, though.
Memory allocated by qemu_memalign or qemu_blockalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32.
String manipulation
===================
Do not use the strncpy function. As mentioned in the man page, it does *not*
guarantee a NULL-terminated buffer, which makes it extremely dangerous to use.
It also zeros trailing destination bytes out to the specified length. Instead,
use this similar function when possible, but note its different signature:
.. code-block:: c
void pstrcpy(char *dest, int dest_buf_size, const char *src)
Don't use strcat because it can't check for buffer overflows, but:
.. code-block:: c
char *pstrcat(char *buf, int buf_size, const char *s)
The same limitation exists with sprintf and vsprintf, so use snprintf and
vsnprintf.
QEMU provides other useful string functions:
.. code-block:: c
int strstart(const char *str, const char *val, const char **ptr)
int stristart(const char *str, const char *val, const char **ptr)
int qemu_strnlen(const char *s, int max_len)
There are also replacement character processing macros for isxyz and toxyz,
so instead of e.g. isalnum you should use qemu_isalnum.
Because of the memory management rules, you must use g_strdup/g_strndup
instead of plain strdup/strndup.
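Using the signatures above, a bounded copy and append looks like:

.. code-block:: c

char buf[32];

pstrcpy(buf, sizeof(buf), "cpu");     /* always NUL-terminates dest */
pstrcat(buf, sizeof(buf), "-model");  /* append, bounded by buf_size */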
Printf-style functions
======================
Whenever you add a new printf-style function, i.e., one with a format
string argument and following "..." in its prototype, be sure to use
gcc's printf attribute directive in the prototype.
This makes it so gcc's -Wformat and -Wformat-security options can do
their jobs and cross-check format strings with the number and types
of arguments.
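As a sketch using the raw GCC attribute (QEMU's headers provide a convenience
macro for this, but the effect is the same):

.. code-block:: c

/* Argument 1 is the format string; the variable arguments start at 2. */
void my_printf_like(const char *fmt, ...)
    __attribute__((format(printf, 1, 2)));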
C standard, implementation defined and undefined behaviors
==========================================================
C code in QEMU should be written to the C99 language specification. A copy
of the final version of the C99 standard with corrigenda TC1, TC2, and TC3
included, formatted as a draft, can be downloaded from:
`<http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf>`_
The C language specification defines regions of undefined behavior and
implementation defined behavior (to give compiler authors enough leeway to
produce better code). In general, code in QEMU should follow the language
specification and avoid both undefined and implementation defined
constructs. ("It works fine on the gcc I tested it with" is not a valid
argument...) However there are a few areas where we allow ourselves to
assume certain behaviors because in practice all the platforms we care about
behave in the same way and writing strictly conformant code would be
painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)
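A small illustration of the second assumption:

.. code-block:: c

int32_t v = -8;
/* Arithmetic right shift assumed: the sign bit is duplicated. */
assert((v >> 1) == -4);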
In addition, QEMU assumes that the compiler does not use the latitude
given in C99 and C11 to treat aspects of signed '<<' as undefined, as
documented in the GNU Compiler Collection manual starting at version 4.0.
Automatic memory deallocation
=============================
QEMU has a mandatory dependency on either the GCC or CLang compiler. As
such it has the freedom to make use of a C language extension for
automatically running a cleanup function when a stack variable goes
out of scope. This can be used to simplify function cleanup paths,
often allowing many goto jumps to be eliminated, through automatic
free'ing of memory.
The GLib2 library provides a number of functions/macros for enabling
automatic cleanup:
`<https://developer.gnome.org/glib/stable/glib-Miscellaneous-Macros.html>`_
Most notably:
* g_autofree - will invoke g_free() on the variable going out of scope
* g_autoptr - for structs / objects, will invoke the cleanup func created
by a previous use of G_DEFINE_AUTOPTR_CLEANUP_FUNC. This is
supported for most GLib data types and GObjects
For example, instead of
.. code-block:: c
int somefunc(void) {
    int ret = -1;
    char *foo = g_strdup_printf("foo%s", "wibble");
    GList *bar = .....

    if (eek) {
        goto cleanup;
    }

    ret = 0;

 cleanup:
    g_free(foo);
    g_list_free(bar);
    return ret;
}
Using g_autofree/g_autoptr enables the code to be written as:
.. code-block:: c
int somefunc(void) {
    g_autofree char *foo = g_strdup_printf("foo%s", "wibble");
    g_autoptr(GList) bar = .....

    if (eek) {
        return -1;
    }

    return 0;
}
While this generally results in simpler, less leak-prone code, there
are still some caveats to beware of:
* Variables declared with g_auto* MUST always be initialized,
otherwise the cleanup function will use uninitialized stack memory
* If a variable declared with g_auto* holds a value which must
live beyond the life of the function, that value must be saved
and the original variable NULL'd out. This can be simpler using
g_steal_pointer
.. code-block:: c
char *somefunc(void) {
    g_autofree char *foo = g_strdup_printf("foo%s", "wibble");
    g_autoptr(GList) bar = .....

    if (eek) {
        return NULL;
    }

    return g_steal_pointer(&foo);
}
QEMU Specific Idioms
********************
Error handling and reporting
============================
Reporting errors to the human user
----------------------------------
Do not use printf(), fprintf() or monitor_printf(). Instead, use
error_report() or error_vreport() from error-report.h. This ensures the
error is reported in the right place (current monitor or stderr), and in
a uniform format.
Use error_printf() & friends to print additional information.
error_report() prints the current location. In certain common cases
like command line parsing, the current location is tracked
automatically. To manipulate it manually, use the loc_``*``() from
error-report.h.
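For instance (a sketch; the variables are made up):

.. code-block:: c

if (fd < 0) {
    error_report("could not open %s: %s", filename, strerror(errno));
    return -1;
}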
Propagating errors
------------------
An error can't always be reported to the user right where it's detected,
but often needs to be propagated up the call chain to a place that can
handle it. This can be done in various ways.
The most flexible one is Error objects. See error.h for usage
information.
Use the simplest suitable method to communicate success / failure to
callers. Stick to common methods: non-negative on success / -1 on
error, non-negative / -errno, non-null / null, or Error objects.
Example: when a function returns a non-null pointer on success, and it
can fail only in one way (as far as the caller is concerned), returning
null on failure is just fine, and certainly simpler and a lot easier on
the eyes than propagating an Error object through an Error ``*````*`` parameter.
Example: when a function's callers need to report details on failure
only the function really knows, use Error ``*````*``, and set suitable errors.
Do not report an error to the user when you're also returning an error
for somebody else to handle. Leave the reporting to the place that
consumes the error returned.
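A minimal sketch of the Error ``*````*`` pattern (the function is hypothetical;
error_setg() comes from error.h):

.. code-block:: c

int my_backend_open(const char *path, Error **errp)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        error_setg(errp, "could not open %s", path);
        return -1;   /* set the error; let the caller report it */
    }
    return fd;
}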
Handling errors
---------------
Calling exit() is fine when handling configuration errors during
startup. It's problematic during normal operation. In particular,
monitor commands should never exit().
Do not call exit() or abort() to handle an error that can be triggered
by the guest (e.g., some unimplemented corner case in guest code
translation or device emulation). Guests should not be able to
terminate QEMU.
Note that &error_fatal is just another way to exit(1), and &error_abort
is just another way to abort().
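Sketch, reusing the hypothetical helper above: passing &error_fatal turns any
failure into exit(1), which is acceptable only for configuration-time code:

.. code-block:: c

/* During startup, failure here is fatal by design: */
int fd = my_backend_open(conf_path, &error_fatal);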
trace-events style
==================
0x prefix
---------
In trace-events files, use a '0x' prefix to specify hex numbers, as in:
.. code-block::
some_trace(unsigned x, uint64_t y) "x 0x%x y 0x" PRIx64
An exception is made for groups of numbers that are hexadecimal by
convention and separated by the symbols '.', '/', ':', or ' ' (such as
PCI bus id):
.. code-block::
another_trace(int cssid, int ssid, int dev_num) "bus id: %x.%x.%04x"
However, you can use '0x' for such groups if you want. Anyway, be sure that
it is obvious that numbers are in hex, ex.:
.. code-block::
data_dump(uint8_t c1, uint8_t c2, uint8_t c3) "bytes (in hex): %02x %02x %02x"
Rationale: hex numbers are hard to read in logs when there is no 0x prefix,
especially when (occasionally) the representation doesn't contain any letters
and especially in one line with other decimal numbers. Number groups are allowed
to not use '0x' because for some things notations like %x.%x.%x are used not
only in Qemu. Also dumping raw data bytes with '0x' is less readable.
'#' printf flag
---------------
Do not use printf flag '#', like '%#x'.
Rationale: there are two ways to add a '0x' prefix to printed number: '0x%...'
and '%#...'. For consistency the only one way should be used. Arguments for
'0x%' are:
* it is more popular
* '%#' omits the 0x for the value 0 which makes output inconsistent


@@ -1,8 +1,8 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@@ -10,7 +10,7 @@
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
@@ -112,7 +112,7 @@ modification follow. Pay close attention to the difference between a
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
@@ -146,7 +146,7 @@ such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
@@ -432,7 +432,7 @@ decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
@@ -455,7 +455,7 @@ FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
@@ -476,7 +476,7 @@ convey the exclusion of warranty; and each file should have at least the
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
@@ -485,7 +485,7 @@ convey the exclusion of warranty; and each file should have at least the
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Also add information on how to contact you by electronic and paper mail.
@@ -500,5 +500,3 @@ necessary. Here is a sample; alter the names:
Ty Coon, President of Vice
That's all there is to it!


@@ -1,159 +0,0 @@
1. Preprocessor
For variadic macros, stick with this C99-like syntax:
#define DPRINTF(fmt, ...) \
    do { printf("IRQ: " fmt, ## __VA_ARGS__); } while (0)
2. C types
It should be common sense to use the right type, but we have collected
a few useful guidelines here.
2.1. Scalars
If you're using "int" or "long", odds are good that there's a better type.
If a variable is counting something, it should be declared with an
unsigned type.
If it's host memory-size related, size_t should be a good choice (use
ssize_t only if required). Guest RAM memory offsets must use ram_addr_t,
but only for RAM, it may not cover whole guest address space.
If it's file-size related, use off_t.
If it's file-offset related (i.e., signed), use off_t.
If it's just counting small numbers use "unsigned int";
(on all but oddball embedded systems, you can assume that that
type is at least four bytes wide).
In the event that you require a specific width, use a standard type
like int32_t, uint32_t, uint64_t, etc. The specific types are
mandatory for VMState fields.
Don't use Linux kernel internal types like u32, __u32 or __le32.
Use hwaddr for guest physical addresses except pcibus_t
for PCI addresses. In addition, ram_addr_t is a QEMU internal address
space that maps guest RAM physical addresses into an intermediate
address space that can map to host virtual address spaces. Generally
speaking, the size of guest memory can always fit into ram_addr_t but
it would not be correct to store an actual guest physical address in a
ram_addr_t.
For CPU virtual addresses there are several possible types.
vaddr is the best type to use to hold a CPU virtual address in
target-independent code. It is guaranteed to be large enough to hold a
virtual address for any target, and it does not change size from target
to target. It is always unsigned.
target_ulong is a type the size of a virtual address on the CPU; this means
it may be 32 or 64 bits depending on which target is being built. It should
therefore be used only in target-specific code, and in some
performance-critical built-per-target core code such as the TLB code.
There is also a signed version, target_long.
abi_ulong is for the *-user targets, and represents a type the size of
'void *' in that target's ABI. (This may not be the same as the size of a
full CPU virtual address in the case of target ABIs which use 32 bit pointers
on 64 bit CPUs, like sparc32plus.) Definitions of structures that must match
the target's ABI must use this type for anything that on the target is defined
to be an 'unsigned long' or a pointer type.
There is also a signed version, abi_long.
Of course, take all of the above with a grain of salt. If you're about
to use some system interface that requires a type like size_t, pid_t or
off_t, use matching types for any corresponding variables.
Also, if you try to use e.g., "unsigned int" as a type, and that
conflicts with the signedness of a related variable, sometimes
it's best just to use the *wrong* type, if "pulling the thread"
and fixing all related variables would be too invasive.
Finally, while using descriptive types is important, be careful not to
go overboard. If whatever you're doing causes warnings, or requires
casts, then reconsider or ask for help.
2.2. Pointers
Ensure that all of your pointers are "const-correct".
Unless a pointer is used to modify the pointed-to storage,
give it the "const" attribute. That way, the reader knows
up-front that this is a read-only pointer. Perhaps more
importantly, if we're diligent about this, when you see a non-const
pointer, you're guaranteed that it is used to modify the storage
it points to, or it is aliased to another pointer that is.
2.3. Typedefs
Typedefs are used to eliminate the redundant 'struct' keyword.
2.4. Reserved namespaces in C and POSIX
Underscore capital, double underscore, and underscore 't' suffixes should be
avoided.
3. Low level memory management
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_memalign/qemu_blockalign/qemu_vfree
APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Memory allocated by qemu_memalign or qemu_blockalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32.
4. String manipulation
Do not use the strncpy function. As mentioned in the man page, it does *not*
guarantee a NULL-terminated buffer, which makes it extremely dangerous to use.
It also zeros trailing destination bytes out to the specified length. Instead,
use this similar function when possible, but note its different signature:
void pstrcpy(char *dest, int dest_buf_size, const char *src)
Don't use strcat because it can't check for buffer overflows, but:
char *pstrcat(char *buf, int buf_size, const char *s)
The same limitation exists with sprintf and vsprintf, so use snprintf and
vsnprintf.
QEMU provides other useful string functions:
int strstart(const char *str, const char *val, const char **ptr)
int stristart(const char *str, const char *val, const char **ptr)
int qemu_strnlen(const char *s, int max_len)
There are also replacement character processing macros for isxyz and toxyz,
so instead of e.g. isalnum you should use qemu_isalnum.
Because of the memory management rules, you must use g_strdup/g_strndup
instead of plain strdup/strndup.
5. Printf-style functions
Whenever you add a new printf-style function, i.e., one with a format
string argument and following "..." in its prototype, be sure to use
gcc's printf attribute directive in the prototype.
This makes it so gcc's -Wformat and -Wformat-security options can do
their jobs and cross-check format strings with the number and types
of arguments.
6. C standard, implementation defined and undefined behaviors
C code in QEMU should be written to the C99 language specification. A copy
of the final version of the C99 standard with corrigenda TC1, TC2, and TC3
included, formatted as a draft, can be downloaded from:
http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf
The C language specification defines regions of undefined behavior and
implementation defined behavior (to give compiler authors enough leeway to
produce better code). In general, code in QEMU should follow the language
specification and avoid both undefined and implementation defined
constructs. ("It works fine on the gcc I tested it with" is not a valid
argument...) However there are a few areas where we allow ourselves to
assume certain behaviors because in practice all the platforms we care about
behave in the same way and writing strictly conformant code would be
painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)


@@ -1,20 +1,26 @@
The following points clarify the QEMU license:
The QEMU distribution includes both the QEMU emulator and
various firmware files. These are separate programs that are
distributed together for our users' convenience, and they have
separate licenses.
1) QEMU as a whole is released under the GNU General Public License,
version 2.
The following points clarify the license of the QEMU emulator:
2) Parts of QEMU have specific licenses which are compatible with the
GNU General Public License, version 2. Hence each source file contains
its own licensing information. Source files with no licensing information
are released under the GNU General Public License, version 2 or (at your
option) any later version.
1) The QEMU emulator as a whole is released under the GNU General
Public License, version 2.
2) Parts of the QEMU emulator have specific licenses which are compatible
with the GNU General Public License, version 2. Hence each source file
contains its own licensing information. Source files with no licensing
information are released under the GNU General Public License, version
2 or (at your option) any later version.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
or directories: bsd-user/, linux-user/, hw/vfio/, hw/xen/xen_pt*.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).
3) The Tiny Code Generator (TCG) is mostly under the BSD or MIT licenses;
but some parts may be GPLv2 or other licenses. Again, see the
specific licensing information in each source file.
4) QEMU is a trademark of Fabrice Bellard.

qemu/MAINTAINERS Normal file (2916 lines)

File diff suppressed because it is too large


@@ -1,115 +0,0 @@
# Makefile for QEMU - modified for Unicorn engine.
# Always point to the root of the build tree (needs GNU make).
BUILD_DIR=$(CURDIR)
# All following code might depend on configuration variables
ifneq ($(wildcard config-host.mak),)
# Put the all: rule here so that config-host.mak can contain dependencies.
all:
include config-host.mak
# Check that we're not trying to do an out-of-tree build from
# a tree that's been used for an in-tree build.
ifneq ($(realpath $(SRC_PATH)),$(realpath .))
ifneq ($(wildcard $(SRC_PATH)/config-host.mak),)
$(error This is an out of tree build but your source tree ($(SRC_PATH)) \
seems to have been used for an in-tree build. You can fix this by running \
"make distclean && rm -rf *-linux-user *-softmmu" in your source tree)
endif
endif
CONFIG_SOFTMMU := $(if $(filter %-softmmu,$(TARGET_DIRS)),y)
-include config-all-devices.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
else
config-host.mak:
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@echo "Please call configure before running make!"
@exit 1
endif
endif
GENERATED_HEADERS = config-host.h
# Don't try to regenerate Makefile or configure
# We don't generate any of them
Makefile: ;
configure: ;
.PHONY: all clean distclean recurse-all
$(call set-vpath, $(SRC_PATH))
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %-config-devices.mak.d, $(TARGET_DIRS))
ifeq ($(SUBDIR_DEVICES_MAK),)
config-all-devices.mak:
$(call quiet-command,echo '# no devices' > $@," GEN $@")
else
config-all-devices.mak: $(SUBDIR_DEVICES_MAK)
$(call quiet-command, sed -n \
's|^\([^=]*\)=\(.*\)$$|\1:=$$(findstring y,$$(\1)\2)|p' \
$(SUBDIR_DEVICES_MAK) | sort -u > $@, \
" GEN $@")
endif
-include $(SUBDIR_DEVICES_MAK_DEP)
%/config-devices.mak: default-configs/%.mak
$(call quiet-command, cp $< $@, " GEN $@")
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
dummy := $(call unnest-vars,,util-obj-y common-obj-y)
all: recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
$(SUBDIR_RULES): qapi-types.c qapi-types.h qapi-visit.c qapi-visit.h $(common-obj-y) $(util-obj-y)
recurse-all: $(SUBDIR_RULES)
######################################################################
clean:
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f TAGS *~ */*~
@# May not be present in GENERATED_HEADERS
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
for d in $(TARGET_DIRS); do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
done
distclean: clean
rm -f config-host.mak config-host.h*
rm -f config-all-devices.mak
rm -f config.log config.status
for d in $(TARGET_DIRS); do \
rm -rf $$d || exit 1 ; \
done
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
Makefile: $(GENERATED_HEADERS)
endif


@@ -1,12 +0,0 @@
#######################################################################
# Common libraries for tools and emulators
util-obj-y = util/ qobject/ qapi/ qapi-types.o qapi-visit.o
common-obj-y += hw/
common-obj-y += accel.o
common-obj-y += vl.o qemu-timer.o
common-obj-y += ../uc.o ../list.o glib_compat.o
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/


@@ -1,84 +0,0 @@
# -*- Mode: makefile -*-
include ../config-host.mak
include config-target.mak
include config-devices.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS += -I.. -I$(SRC_PATH)/target-$(TARGET_BASE_ARCH) -DNEED_CPU_H
QEMU_CFLAGS+=-I$(SRC_PATH)/include
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
all: $(QEMU_PROG)
#########################################################
# cpu emulator library
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
#########################################################
# System emulator target
obj-y += cpus.o ioport.o
obj-y += hw/
obj-y += memory.o cputlb.o
obj-y += memory_mapping.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
obj-y += hw/sparc64/
else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
common-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,..,util-obj-y)
target-obj-y-save := $(target-obj-y) $(util-obj-y)
dummy := $(call unnest-vars,..,common-obj-y)
target-obj-y := $(target-obj-y-save)
all-obj-y += $(common-obj-y)
all-obj-y += $(target-obj-y)
# determine shared lib extension
IS_APPLE := $(shell $(CC) -dM -E - < /dev/null | grep __apple_build_version__ | wc -l | tr -d " ")
ifeq ($(IS_APPLE),1)
EXT = dylib
else
# Cygwin?
IS_CYGWIN := $(shell $(CC) -dumpmachine | grep -i cygwin | wc -l)
ifeq ($(IS_CYGWIN),1)
EXT = dll
else
EXT = so
endif
endif
# build either PROG or PROGW
$(QEMU_PROG): $(all-obj-y)
clean:
rm -f *.a *~ $(QEMU_PROG)
rm -f $(shell find . -name '*.[od]')
GENERATED_HEADERS += config-target.h
Makefile: $(GENERATED_HEADERS)


@@ -1 +1 @@
2.2.1
5.0.1

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,130 +0,0 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
#include "sysemu/accel.h"
#include "hw/boards.h"
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "qom/object.h"
#include "hw/boards.h"
// use default size for TCG translated block
#define TCG_TB_SIZE 0
static bool tcg_allowed = true;
static int tcg_init(MachineState *ms);
static AccelClass *accel_find(struct uc_struct *uc, const char *opt_name);
static int accel_init_machine(AccelClass *acc, MachineState *ms);
static void tcg_accel_class_init(struct uc_struct *uc, ObjectClass *oc, void *data);
static int tcg_init(MachineState *ms)
{
ms->uc->tcg_exec_init(ms->uc, TCG_TB_SIZE * 1024 * 1024); // arch-dependent
return 0;
}
static const TypeInfo accel_type = {
TYPE_ACCEL,
TYPE_OBJECT,
sizeof(AccelClass),
sizeof(AccelState),
};
#define TYPE_TCG_ACCEL ACCEL_CLASS_NAME("tcg")
static const TypeInfo tcg_accel_type = {
TYPE_TCG_ACCEL,
TYPE_ACCEL,
0,
0,
NULL,
NULL,
NULL,
NULL,
NULL,
tcg_accel_class_init,
};
int configure_accelerator(MachineState *ms)
{
int ret;
bool accel_initialised = false;
AccelClass *acc;
acc = accel_find(ms->uc, "tcg");
ret = accel_init_machine(acc, ms);
if (ret < 0) {
fprintf(stderr, "failed to initialize %s: %s\n",
acc->name,
strerror(-ret));
} else {
accel_initialised = true;
}
return !accel_initialised;
}
void register_accel_types(struct uc_struct *uc)
{
type_register_static(uc, &accel_type);
type_register_static(uc, &tcg_accel_type);
}
static void tcg_accel_class_init(struct uc_struct *uc, ObjectClass *oc, void *data)
{
AccelClass *ac = ACCEL_CLASS(uc, oc);
ac->name = "tcg";
ac->init_machine = tcg_init;
ac->allowed = &tcg_allowed;
}
/* Lookup AccelClass from opt_name. Returns NULL if not found */
static AccelClass *accel_find(struct uc_struct *uc, const char *opt_name)
{
char *class_name = g_strdup_printf(ACCEL_CLASS_NAME("%s"), opt_name);
AccelClass *ac = ACCEL_CLASS(uc, object_class_by_name(uc, class_name));
g_free(class_name);
return ac;
}
static int accel_init_machine(AccelClass *acc, MachineState *ms)
{
ObjectClass *oc = OBJECT_CLASS(acc);
const char *cname = object_class_get_name(oc);
AccelState *accel = ACCEL(ms->uc, object_new(ms->uc, cname));
int ret;
ms->accelerator = accel;
*(acc->allowed) = true;
ret = acc->init_machine(ms);
if (ret < 0) {
ms->accelerator = NULL;
*(acc->allowed) = false;
object_unref(ms->uc, OBJECT(accel));
}
return ret;
}


@@ -0,0 +1,358 @@
/*
* Atomic helper templates
* Included from tcg-runtime.c and cputlb.c.
*
* Copyright (c) 2016 Red Hat, Inc
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 16
# define SUFFIX o
# define DATA_TYPE Int128
# define BSWAP bswap128
# define SHIFT 4
#elif DATA_SIZE == 8
# define SUFFIX q
# define DATA_TYPE uint64_t
# define SDATA_TYPE int64_t
# define BSWAP bswap64
# define SHIFT 3
#elif DATA_SIZE == 4
# define SUFFIX l
# define DATA_TYPE uint32_t
# define SDATA_TYPE int32_t
# define BSWAP bswap32
# define SHIFT 2
#elif DATA_SIZE == 2
# define SUFFIX w
# define DATA_TYPE uint16_t
# define SDATA_TYPE int16_t
# define BSWAP bswap16
# define SHIFT 1
#elif DATA_SIZE == 1
# define SUFFIX b
# define DATA_TYPE uint8_t
# define SDATA_TYPE int8_t
# define BSWAP
# define SHIFT 0
#else
# error unsupported data size
#endif
#if DATA_SIZE >= 4
# define ABI_TYPE DATA_TYPE
#else
# define ABI_TYPE uint32_t
#endif
/* Define host-endian atomic operations. Note that END is used within
the ATOMIC_NAME macro, and redefined below. */
#if DATA_SIZE == 1
# define END
#elif defined(HOST_WORDS_BIGENDIAN)
# define END _be
#else
# define END _le
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, cmpv, newv);
#else
#ifdef _MSC_VER
ret = atomic_cmpxchg__nocheck((long *)haddr, cmpv, newv);
#else
ret = atomic_cmpxchg__nocheck(haddr, cmpv, newv);
#endif
#endif
ATOMIC_MMU_CLEANUP;
return ret;
}
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
return val;
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
ret = *haddr;
*haddr = val;
ATOMIC_MMU_CLEANUP;
return ret;
}
#ifdef _MSC_VER
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
ret = atomic_##X((long *)haddr, val); \
ATOMIC_MMU_CLEANUP; \
return ret; \
}
#else
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
ret = atomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
return ret; \
}
#endif
GEN_ATOMIC_HELPER(fetch_add)
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(add_fetch)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
/* These helpers are, as a whole, full barriers. Within the helper,
* the leading barrier is explicit and the trailing barrier is within
* cmpxchg primitive.
*
* Trace this load + RMW loop as a single RMW op. This way, regardless
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
XDATA_TYPE cmp, old, new, val = xval; \
cmp = *haddr; \
do { \
old = cmp; new = FN(old, val); \
cmp = *haddr; \
if (cmp == old) \
*haddr = new; \
} while (cmp != old); \
ATOMIC_MMU_CLEANUP; \
return RET; \
}
GEN_ATOMIC_HELPER_FN(fetch_smin, MIN, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umin, MIN, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_smax, MAX, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umax, MAX, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(smin_fetch, MIN, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umin_fetch, MIN, DATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(smax_fetch, MAX, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA SIZE >= 16 */
#undef END
#if DATA_SIZE > 1
/* Define reverse-host-endian atomic operations. Note that END is used
within the ATOMIC_NAME macro. */
#ifdef HOST_WORDS_BIGENDIAN
# define END _le
#else
# define END _be
#endif
ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
DATA_TYPE ret;
#if DATA_SIZE == 16
ret = atomic16_cmpxchg(haddr, BSWAP(cmpv), BSWAP(newv));
#else
#ifdef _MSC_VER
ret = atomic_cmpxchg__nocheck((long *)haddr, BSWAP(cmpv), BSWAP(newv));
#else
ret = atomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
#endif
#endif
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#if DATA_SIZE >= 16
#if HAVE_ATOMIC128
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
val = atomic16_read(haddr);
ATOMIC_MMU_CLEANUP;
return BSWAP(val);
}
void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
val = BSWAP(val);
atomic16_set(haddr, val);
ATOMIC_MMU_CLEANUP;
}
#endif
#else
ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
ABI_TYPE val EXTRA_ARGS)
{
ATOMIC_MMU_DECLS;
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
ABI_TYPE ret;
ret = *haddr;
*haddr = BSWAP(val);
ATOMIC_MMU_CLEANUP;
return BSWAP(ret);
}
#ifdef _MSC_VER
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
ret = atomic_##X((long *)haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
return BSWAP(ret); \
}
#else
#define GEN_ATOMIC_HELPER(X) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE val EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
DATA_TYPE ret; \
ret = atomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
return BSWAP(ret); \
}
#endif
GEN_ATOMIC_HELPER(fetch_and)
GEN_ATOMIC_HELPER(fetch_or)
GEN_ATOMIC_HELPER(fetch_xor)
GEN_ATOMIC_HELPER(and_fetch)
GEN_ATOMIC_HELPER(or_fetch)
GEN_ATOMIC_HELPER(xor_fetch)
#undef GEN_ATOMIC_HELPER
/* These helpers are, as a whole, full barriers. Within the helper,
* the leading barrier is explicit and the trailing barrier is within
* cmpxchg primitive.
*
* Trace this load + RMW loop as a single RMW op. This way, regardless
* of CF_PARALLEL's value, we'll trace just a read and a write.
*/
#define GEN_ATOMIC_HELPER_FN(X, FN, XDATA_TYPE, RET) \
ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
ABI_TYPE xval EXTRA_ARGS) \
{ \
ATOMIC_MMU_DECLS; \
XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
XDATA_TYPE ldo, ldn, old, new, val = xval; \
ldn = *haddr; \
do { \
ldo = ldn; old = BSWAP(ldo); new = FN(old, val); \
ldn = *haddr; \
if (ldn == ldo) \
*haddr = BSWAP(new); \
} while (ldo != ldn); \
ATOMIC_MMU_CLEANUP; \
return RET; \
}
GEN_ATOMIC_HELPER_FN(fetch_smin, MIN, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umin, MIN, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_smax, MAX, SDATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(fetch_umax, MAX, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(smin_fetch, MIN, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umin_fetch, MIN, DATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(smax_fetch, MAX, SDATA_TYPE, new)
GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
/* Note that for addition, we need to use a separate cmpxchg loop instead
of bswaps for the reverse-host-endian helpers. */
#define ADD(X, Y) (X + Y)
GEN_ATOMIC_HELPER_FN(fetch_add, ADD, DATA_TYPE, old)
GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
#undef ADD
#undef GEN_ATOMIC_HELPER_FN
#endif /* DATA_SIZE >= 16 */
#undef END
#endif /* DATA_SIZE > 1 */
#undef BSWAP
#undef ABI_TYPE
#undef DATA_TYPE
#undef SDATA_TYPE
#undef SUFFIX
#undef DATA_SIZE
#undef SHIFT


@@ -0,0 +1,58 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
/* exit the current TB, but without causing any exception to be raised */
void cpu_loop_exit_noexc(CPUState *cpu)
{
cpu->exception_index = -1;
cpu_loop_exit(cpu);
}
void cpu_reloading_memory_map(void)
{
}
void cpu_loop_exit(CPUState *cpu)
{
/* Unlock JIT write protect if applicable. */
tb_exec_unlock(cpu->uc->tcg_ctx);
/* Undo the setting in cpu_tb_exec. */
cpu->can_do_io = 1;
siglongjmp(cpu->uc->jmp_bufs[cpu->uc->nested_level - 1], 1);
}
void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
{
if (pc) {
cpu_restore_state(cpu, pc, true);
}
cpu_loop_exit(cpu);
}
void cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc)
{
cpu->exception_index = EXCP_ATOMIC;
cpu_loop_exit_restore(cpu, pc);
}
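
cpu_loop_exit() above unwinds with siglongjmp() to whichever sigsetjmp() was armed for the current nesting depth; indexing jmp_bufs with nested_level - 1 is what lets re-entrant uc_emu_start() calls each own a jump buffer. Below is a minimal POSIX sketch of that pattern, with all names (MAX_NESTING, loop_exit, emulate) invented for illustration:

#include <setjmp.h>
#include <stdio.h>

#define MAX_NESTING 64

static sigjmp_buf jmp_bufs[MAX_NESTING];
static int nested_level;

static void loop_exit(void)
{
    /* jump back to the sigsetjmp armed for the current nesting level */
    siglongjmp(jmp_bufs[nested_level - 1], 1);
}

static void emulate(int depth)
{
    nested_level++;
    if (sigsetjmp(jmp_bufs[nested_level - 1], 0) != 0) {
        /* an inner loop_exit() landed here: handle and fall through */
        printf("level %d: unwound\n", depth);
    } else if (depth < 2) {
        emulate(depth + 1);   /* a hook could re-enter emulation here */
    } else {
        loop_exit();          /* e.g. an exception in the innermost run */
    }
    nested_level--;
}

int main(void)
{
    emulate(0);
    return 0;
}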

611
qemu/accel/tcg/cpu-exec.c Normal file

@@ -0,0 +1,611 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "hw/core/cpu.h"
#include "exec/exec-all.h"
#include "tcg/tcg.h"
#include "qemu/atomic.h"
#include "qemu/timer.h"
#include "exec/tb-hash.h"
#include "exec/tb-lookup.h"
#include "sysemu/cpus.h"
#include "uc_priv.h"
/* -icount align implementation. */
typedef struct SyncClocks {
int64_t diff_clk;
int64_t last_cpu_icount;
int64_t realtime_clock;
} SyncClocks;
/* Allow the guest to have a max 3ms advance.
* The difference between the 2 clocks could therefore
* oscillate around 0.
*/
#define VM_CLOCK_ADVANCE 3000000
#define THRESHOLD_REDUCE 1.5
#define MAX_DELAY_PRINT_RATE 2000000000LL
#define MAX_NB_PRINTS 100
/* Execute a TB, and fix up the CPU state afterwards if necessary */
static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, TranslationBlock *itb)
{
CPUArchState *env = cpu->env_ptr;
uintptr_t ret;
TranslationBlock *last_tb;
int tb_exit;
uint8_t *tb_ptr = itb->tc.ptr;
UC_TRACE_START(UC_TRACE_TB_EXEC);
tb_exec_lock(cpu->uc->tcg_ctx);
ret = tcg_qemu_tb_exec(env, tb_ptr);
tb_exec_unlock(cpu->uc->tcg_ctx);
UC_TRACE_END(UC_TRACE_TB_EXEC, "[uc] exec tb 0x%" PRIx64 ": ", itb->pc);
cpu->can_do_io = 1;
last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
tb_exit = ret & TB_EXIT_MASK;
// trace_exec_tb_exit(last_tb, tb_exit);
if (tb_exit > TB_EXIT_IDX1) {
/* We didn't start executing this TB (eg because the instruction
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = CPU_GET_CLASS(cpu);
if (!HOOK_EXISTS(env->uc, UC_HOOK_CODE) && !env->uc->timeout) {
// We should sync the PC on R/W errors.
switch (env->uc->invalid_error) {
case UC_ERR_WRITE_PROT:
case UC_ERR_READ_PROT:
case UC_ERR_FETCH_PROT:
case UC_ERR_WRITE_UNMAPPED:
case UC_ERR_READ_UNMAPPED:
case UC_ERR_FETCH_UNMAPPED:
case UC_ERR_WRITE_UNALIGNED:
case UC_ERR_READ_UNALIGNED:
case UC_ERR_FETCH_UNALIGNED:
break;
default:
if (cc->synchronize_from_tb) {
// avoid sync twice when helper_uc_tracecode() already did this.
if (env->uc->emu_counter <= env->uc->emu_count &&
!env->uc->stop_request && !env->uc->quit_request)
cc->synchronize_from_tb(cpu, last_tb);
} else {
assert(cc->set_pc);
cc->set_pc(cpu, last_tb->pc);
}
}
}
cpu->tcg_exit_req = 0;
}
return ret;
}
/* Execute the code without caching the generated code. An interpreter
could be used if available. */
static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
TranslationBlock *orig_tb, bool ignore_icount)
{
TranslationBlock *tb;
uint32_t cflags = curr_cflags() | CF_NOCACHE;
if (ignore_icount) {
cflags &= ~CF_USE_ICOUNT;
}
/* Should never happen.
We only end up here when an existing TB is too long. */
cflags |= MIN(max_cycles, CF_COUNT_MASK);
mmap_lock();
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base,
orig_tb->flags, cflags);
tb->orig_tb = orig_tb;
mmap_unlock();
/* execute the generated code */
cpu_tb_exec(cpu, tb);
mmap_lock();
tb_phys_invalidate(cpu->uc->tcg_ctx, tb, -1);
mmap_unlock();
tcg_tb_remove(cpu->uc->tcg_ctx, tb);
}
struct tb_desc {
target_ulong pc;
target_ulong cs_base;
CPUArchState *env;
tb_page_addr_t phys_page1;
uint32_t flags;
uint32_t cf_mask;
uint32_t trace_vcpu_dstate;
};
static bool tb_lookup_cmp(struct uc_struct *uc, const void *p, const void *d)
{
const TranslationBlock *tb = p;
const struct tb_desc *desc = d;
if (tb->pc == desc->pc &&
tb->page_addr[0] == desc->phys_page1 &&
tb->cs_base == desc->cs_base &&
tb->flags == desc->flags &&
tb->trace_vcpu_dstate == desc->trace_vcpu_dstate &&
(tb_cflags(tb) & (CF_HASH_MASK | CF_INVALID)) == desc->cf_mask) {
/* check next page if needed */
if (tb->page_addr[1] == -1) {
return true;
} else {
tb_page_addr_t phys_page2;
target_ulong virt_page2;
virt_page2 = (desc->pc & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
phys_page2 = get_page_addr_code(desc->env, virt_page2);
if (tb->page_addr[1] == phys_page2) {
return true;
}
}
}
return false;
}
TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
uint32_t cf_mask)
{
struct uc_struct *uc = cpu->uc;
tb_page_addr_t phys_pc;
struct tb_desc desc;
uint32_t h;
desc.env = (CPUArchState *)cpu->env_ptr;
desc.cs_base = cs_base;
desc.flags = flags;
desc.cf_mask = cf_mask;
desc.trace_vcpu_dstate = *cpu->trace_dstate;
desc.pc = pc;
phys_pc = get_page_addr_code(desc.env, pc);
if (phys_pc == -1) {
return NULL;
}
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
return qht_lookup_custom(uc, &cpu->uc->tcg_ctx->tb_ctx.htable, &desc, h, tb_lookup_cmp);
}
void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr)
{
if (TCG_TARGET_HAS_direct_jump) {
uintptr_t offset = tb->jmp_target_arg[n];
uintptr_t tc_ptr = (uintptr_t)tb->tc.ptr;
tb_target_set_jmp_target(tc_ptr, tc_ptr + offset, addr);
} else {
tb->jmp_target_arg[n] = addr;
}
}
static inline void tb_add_jump(TranslationBlock *tb, int n,
TranslationBlock *tb_next)
{
uintptr_t old;
assert(n < ARRAY_SIZE(tb->jmp_list_next));
/* make sure the destination TB is valid */
if (tb_next->cflags & CF_INVALID) {
goto out_unlock_next;
}
/* Atomically claim the jump destination slot only if it was NULL */
#ifdef _MSC_VER
old = atomic_cmpxchg((long *)&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)tb_next);
#else
old = atomic_cmpxchg(&tb->jmp_dest[n], (uintptr_t)NULL, (uintptr_t)tb_next);
#endif
if (old) {
goto out_unlock_next;
}
/* patch the native jump address */
tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc.ptr);
/* add in TB jmp list */
tb->jmp_list_next[n] = tb_next->jmp_list_head;
tb_next->jmp_list_head = (uintptr_t)tb | n;
return;
out_unlock_next:
return;
}
static inline TranslationBlock *tb_find(CPUState *cpu,
TranslationBlock *last_tb,
int tb_exit, uint32_t cf_mask)
{
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
uc_tb cur_tb, prev_tb;
uc_engine *uc = cpu->uc;
struct list_item *cur;
struct hook *hook;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
if (tb == NULL) {
mmap_lock();
tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
mmap_unlock();
/* We add the TB in the virtual pc hash table for the fast lookup */
cpu->tb_jmp_cache[tb_jmp_cache_hash_func(cpu->uc, pc)] = tb;
if (uc->last_tb) {
UC_TB_COPY(&cur_tb, tb);
UC_TB_COPY(&prev_tb, uc->last_tb);
for (cur = uc->hook[UC_HOOK_EDGE_GENERATED_IDX].head;
cur != NULL && (hook = (struct hook *)cur->data); cur = cur->next) {
if (hook->to_delete) {
continue;
}
if (HOOK_BOUND_CHECK(hook, (uint64_t)tb->pc)) {
((uc_hook_edge_gen_t)hook->callback)(uc, &cur_tb, &prev_tb, hook->user_data);
}
}
}
}
/* We don't take care of direct jumps when address mapping changes in
* system emulation. So it's not safe to make a direct jump to a TB
* spanning two pages because the mapping for the second page can change.
*/
if (tb->page_addr[1] != -1) {
last_tb = NULL;
}
/* See if we can patch the calling TB. */
if (last_tb) {
tb_add_jump(last_tb, tb_exit, tb);
}
return tb;
}
static inline bool cpu_handle_halt(CPUState *cpu)
{
if (cpu->halted) {
#if 0
#if defined(TARGET_I386)
if ((cpu->interrupt_request & CPU_INTERRUPT_POLL)
&& replay_interrupt()) {
X86CPU *x86_cpu = X86_CPU(cpu);
apic_poll_irq(x86_cpu->apic_state);
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
}
#endif
#endif
if (!cpu_has_work(cpu)) {
return true;
}
cpu->halted = 0;
}
return false;
}
static inline void cpu_handle_debug_exception(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
cc->debug_excp_handler(cpu);
}
static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
{
bool catched = false;
struct uc_struct *uc = cpu->uc;
struct hook *hook;
// printf(">> exception index = %u\n", cpu->exception_index); qq
if (cpu->uc->stop_interrupt && cpu->uc->stop_interrupt(cpu->uc, cpu->exception_index)) {
// Unicorn: call registered invalid instruction callbacks
catched = false;
HOOK_FOREACH_VAR_DECLARE;
HOOK_FOREACH(uc, hook, UC_HOOK_INSN_INVALID) {
if (hook->to_delete) {
continue;
}
catched = ((uc_cb_hookinsn_invalid_t)hook->callback)(uc, hook->user_data);
if (catched) {
break;
}
}
if (!catched) {
uc->invalid_error = UC_ERR_INSN_INVALID;
}
// we want to stop emulation
*ret = EXCP_HLT;
return true;
}
if (cpu->exception_index < 0) {
return false;
}
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
*ret = cpu->exception_index;
if (*ret == EXCP_DEBUG) {
cpu_handle_debug_exception(cpu);
}
cpu->exception_index = -1;
return true;
} else {
#if defined(TARGET_X86_64)
CPUArchState *env = cpu->env_ptr;
if (env->exception_is_int) {
// point EIP to the next instruction after INT
env->eip = env->exception_next_eip;
}
#endif
#if defined(TARGET_MIPS) || defined(TARGET_MIPS64)
// Unicorn: Imported from https://github.com/unicorn-engine/unicorn/pull/1098
CPUMIPSState *env = &(MIPS_CPU(cpu)->env);
env->active_tc.PC = uc->next_pc;
#endif
#if defined(TARGET_RISCV)
CPURISCVState *env = &(RISCV_CPU(uc->cpu)->env);
env->pc += 4;
#endif
// Unicorn: call registered interrupt callbacks
catched = false;
HOOK_FOREACH_VAR_DECLARE;
HOOK_FOREACH(uc, hook, UC_HOOK_INTR) {
if (hook->to_delete) {
continue;
}
((uc_cb_hookintr_t)hook->callback)(uc, cpu->exception_index, hook->user_data);
catched = true;
}
// Unicorn: if the interrupt was not caught by any hook, stop execution.
if (!catched) {
// printf("AAAAAAAAAAAA\n"); qq
uc->invalid_error = UC_ERR_EXCEPTION;
cpu->halted = 1;
*ret = EXCP_HLT;
return true;
}
cpu->exception_index = -1;
}
*ret = EXCP_INTERRUPT;
return false;
}
static inline bool cpu_handle_interrupt(CPUState *cpu,
TranslationBlock **last_tb)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
/* Clear the interrupt flag now since we're processing
* cpu->interrupt_request and cpu->exit_request.
* Ensure zeroing happens before reading cpu->exit_request or
* cpu->interrupt_request (see also smp_wmb in cpu_exit())
*/
cpu_neg(cpu)->icount_decr.u16.high = 0;
if (unlikely(cpu->interrupt_request)) {
int interrupt_request;
interrupt_request = cpu->interrupt_request;
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
}
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
return true;
}
#if defined(TARGET_I386)
else if (interrupt_request & CPU_INTERRUPT_INIT) {
X86CPU *x86_cpu = X86_CPU(cpu);
CPUArchState *env = &x86_cpu->env;
//replay_interrupt();
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
return true;
}
#else
else if (interrupt_request & CPU_INTERRUPT_RESET) {
//replay_interrupt();
cpu_reset(cpu);
return true;
}
#endif
/* The target hook has 3 exit conditions:
False when the interrupt isn't processed,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
else {
if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
//replay_interrupt();
cpu->exception_index = -1;
*last_tb = NULL;
}
/* The target hook may have updated the 'cpu->interrupt_request';
* reload the 'interrupt_request' value */
interrupt_request = cpu->interrupt_request;
}
if (interrupt_request & CPU_INTERRUPT_EXITTB) {
cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
/* ensure that no TB jump will be modified as
the program flow was changed */
*last_tb = NULL;
}
}
/* Finally, check if we need to exit to the main loop. */
if (unlikely(cpu->exit_request)) {
cpu->exit_request = 0;
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
}
return true;
}
return false;
}
static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
TranslationBlock **last_tb, int *tb_exit)
{
uintptr_t ret;
int32_t insns_left;
// trace_exec_tb(tb, tb->pc);
ret = cpu_tb_exec(cpu, tb);
cpu->uc->last_tb = tb; // Trace the last tb we executed.
tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
*tb_exit = ret & TB_EXIT_MASK;
if (*tb_exit != TB_EXIT_REQUESTED) {
*last_tb = tb;
return;
}
*last_tb = NULL;
insns_left = cpu_neg(cpu)->icount_decr.u32;
if (insns_left < 0) {
/* Something asked us to stop executing chained TBs; just
* continue round the main loop. Whatever requested the exit
* will also have set something else (eg exit_request or
* interrupt_request) which will be handled by
* cpu_handle_interrupt. cpu_handle_interrupt will also
* clear cpu->icount_decr.u16.high.
*/
return;
}
/* Instruction counter expired. */
/* Refill decrementer and continue execution. */
insns_left = MIN(0xffff, cpu->icount_budget);
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
cpu->icount_extra = cpu->icount_budget - insns_left;
if (!cpu->icount_extra) {
/* Execute any remaining instructions, then let the main loop
* handle the next event.
*/
if (insns_left > 0) {
cpu_exec_nocache(cpu, insns_left, tb, false);
}
}
}
/* main execution loop */
int cpu_exec(struct uc_struct *uc, CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
int ret;
// SyncClocks sc = { 0 };
if (cpu_handle_halt(cpu)) {
return EXCP_HALTED;
}
// rcu_read_lock();
cc->cpu_exec_enter(cpu);
/* Calculate difference between guest clock and host clock.
* This delay includes the delay of the last cycle, so
* what we have to do is sleep until it is 0. As for the
* advance/delay we gain here, we try to fix it next time.
*/
// init_delay_params(&sc, cpu);
// Unicorn: We would like to support nested uc_emu_start calls.
/* prepare setjmp context for exception handling */
// if (sigsetjmp(cpu->jmp_env, 0) != 0) {
if (sigsetjmp(uc->jmp_bufs[uc->nested_level - 1], 0) != 0) {
#if defined(__clang__) || !QEMU_GNUC_PREREQ(4, 6)
/* Some compilers wrongly smash all local variables after
* siglongjmp. There were bug reports for gcc 4.5.0 and clang.
* Reload essential local variables here for those compilers.
* Newer versions of gcc would complain about this code (-Wclobbered). */
cc = CPU_GET_CLASS(cpu);
#else /* buggy compiler */
/* Assert that the compiler does not smash local variables. */
// g_assert(cpu == current_cpu);
g_assert(cc == CPU_GET_CLASS(cpu));
#endif /* buggy compiler */
assert_no_pages_locked();
}
/* if an exception is pending, we execute it here */
while (!cpu_handle_exception(cpu, &ret)) {
TranslationBlock *last_tb = NULL;
int tb_exit = 0;
while (!cpu_handle_interrupt(cpu, &last_tb)) {
uint32_t cflags = cpu->cflags_next_tb;
TranslationBlock *tb;
/* When requested, use an exact setting for cflags for the next
execution. This is used for icount, precise smc, and stop-
after-access watchpoints. Since this request should never
have CF_INVALID set, -1 is a convenient invalid value that
does not require tcg headers for cpu_common_reset. */
if (cflags == -1) {
cflags = curr_cflags();
} else {
cpu->cflags_next_tb = -1;
}
tb = tb_find(cpu, last_tb, tb_exit, cflags);
cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
/* Try to align the host and virtual clocks
if the guest is in advance */
// align_clocks(&sc, cpu);
}
}
// Unicorn: Clear any TCG exit flag that might have been left set by exit requests
uc->cpu->tcg_exit_req = 0;
cc->cpu_exec_exit(cpu);
// rcu_read_unlock();
return ret;
}
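
One step in cpu_loop_exec_tb() above is easy to misread: when the instruction counter expires, only the low 16 bits of the remaining budget go into the decrementer and the overflow is parked in icount_extra. A self-contained check of that refill arithmetic follows; the struct is a stand-in whose field names merely echo icount_budget, icount_decr.u16.low and icount_extra:

#include <assert.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct icount_sketch {
    int64_t budget;     /* instructions this slice may execute */
    uint16_t decr_low;  /* 16-bit decrementer consumed by TBs  */
    int64_t extra;      /* budget that did not fit in 16 bits  */
};

static void refill(struct icount_sketch *s)
{
    int32_t insns_left = MIN(0xffff, s->budget);
    s->decr_low = (uint16_t)insns_left;
    s->extra = s->budget - insns_left;
}

int main(void)
{
    struct icount_sketch s = { .budget = 100000 };
    refill(&s);
    /* 100000 = 65535 in the decrementer + 34465 parked in extra */
    assert(s.decr_low == 0xffff && s.extra == 100000 - 0xffff);

    s.budget = 1200;    /* small budgets fit entirely */
    refill(&s);
    assert(s.decr_low == 1200 && s.extra == 0);
    return 0;
}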

2420
qemu/accel/tcg/cputlb.c Normal file

File diff suppressed because it is too large

39
qemu/accel/tcg/tcg-all.c Normal file

@@ -0,0 +1,39 @@
/*
* QEMU System Emulator, accelerator interfaces
*
* Copyright (c) 2003-2008 Fabrice Bellard
* Copyright (c) 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "sysemu/tcg.h"
#include "cpu.h"
#include "sysemu/cpus.h"
#include "tcg/tcg.h"
/* mask must never be zero, except for A20 change call */
static void tcg_handle_interrupt(CPUState *cpu, int mask)
{
cpu->interrupt_request |= mask;
}
CPUInterruptHandler cpu_interrupt_handler = tcg_handle_interrupt;

File diff suppressed because it is too large


@@ -21,19 +21,16 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "unicorn/platform.h"
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "cpu.h"
#include "exec/helper-proto.h"
#include "exec/cpu_ldst.h"
#include "exec/exec-all.h"
#include "exec/tb-lookup.h"
#include "tcg/tcg.h"
/* This file is compiled once, and thus we can't include the standard
"exec/helper-proto.h", which has includes that are target specific. */
#include "exec/helper-head.h"
#define DEF_HELPER_FLAGS_2(name, flags, ret, t1, t2) \
dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2));
#include "tcg-runtime.h"
#include <uc_priv.h>
/* 32-bit helpers */
@@ -107,3 +104,63 @@ int64_t HELPER(mulsh_i64)(int64_t arg1, int64_t arg2)
muls64(&l, &h, arg1, arg2);
return h;
}
uint32_t HELPER(clz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? clz32(arg) : zero_val;
}
uint32_t HELPER(ctz_i32)(uint32_t arg, uint32_t zero_val)
{
return arg ? ctz32(arg) : zero_val;
}
uint64_t HELPER(clz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? clz64(arg) : zero_val;
}
uint64_t HELPER(ctz_i64)(uint64_t arg, uint64_t zero_val)
{
return arg ? ctz64(arg) : zero_val;
}
uint32_t HELPER(clrsb_i32)(uint32_t arg)
{
return clrsb32(arg);
}
uint64_t HELPER(clrsb_i64)(uint64_t arg)
{
return clrsb64(arg);
}
uint32_t HELPER(ctpop_i32)(uint32_t arg)
{
return ctpop32(arg);
}
uint64_t HELPER(ctpop_i64)(uint64_t arg)
{
return ctpop64(arg);
}
void *HELPER(lookup_tb_ptr)(CPUArchState *env)
{
CPUState *cpu = env_cpu(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
struct uc_struct *uc = (struct uc_struct *)cpu->uc;
tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, curr_cflags());
if (tb == NULL) {
return uc->tcg_ctx->code_gen_epilogue;
}
return tb->tc.ptr;
}
void HELPER(exit_atomic)(CPUArchState *env)
{
cpu_loop_exit_atomic(env_cpu(env), GETPC());
}
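
The clz/ctz helpers above take an explicit zero_val because a count of leading or trailing zeros is undefined for an input of 0 in GCC's builtins and differs across guest ISAs, so the translator passes in whatever the guest instruction specifies. A quick standalone check of that contract, with clz32 re-implemented portably for the sketch (QEMU's clz32() behaves the same for nonzero inputs):

#include <assert.h>
#include <stdint.h>

/* portable clz for the sketch; matches clz32() for x != 0 */
static uint32_t clz32_sketch(uint32_t x)
{
    uint32_t n = 0;
    while (!(x & 0x80000000u)) {   /* callers guarantee x != 0 */
        x <<= 1;
        n++;
    }
    return n;
}

static uint32_t helper_clz_i32_sketch(uint32_t arg, uint32_t zero_val)
{
    return arg ? clz32_sketch(arg) : zero_val;
}

int main(void)
{
    assert(helper_clz_i32_sketch(1u, 32) == 31);
    assert(helper_clz_i32_sketch(0x80000000u, 32) == 0);
    /* the point of zero_val: 0 maps to whatever the guest insn wants */
    assert(helper_clz_i32_sketch(0, 32) == 32);    /* e.g. ARM-style CLZ */
    assert(helper_clz_i32_sketch(0, ~0u) == ~0u);  /* an ISA that wants -1 */
    return 0;
}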


@@ -0,0 +1,261 @@
DEF_HELPER_FLAGS_2(div_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(rem_i32, TCG_CALL_NO_RWG_SE, s32, s32, s32)
DEF_HELPER_FLAGS_2(divu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(remu_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(div_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(rem_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(divu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(remu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shl_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(shr_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(sar_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(mulsh_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(muluh_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(clz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(ctz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
DEF_HELPER_FLAGS_2(clz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(ctz_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_1(clrsb_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(clrsb_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(ctpop_i32, TCG_CALL_NO_RWG_SE, i32, i32)
DEF_HELPER_FLAGS_1(ctpop_i64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_FLAGS_1(lookup_tb_ptr, TCG_CALL_NO_WG_SE, ptr, env)
DEF_HELPER_FLAGS_1(exit_atomic, TCG_CALL_NO_WG, noreturn, env)
DEF_HELPER_FLAGS_5(atomic_cmpxchgb, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgw_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_be, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_le, TCG_CALL_NO_WG,
i32, env, tl, i32, i32, i32)
#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, tl, i64, i64, i32)
#endif
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_le), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, tl, i64, i32)
#else
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
TCG_CALL_NO_WG, i32, env, tl, i32, i32)
#endif /* CONFIG_ATOMIC64 */
GEN_ATOMIC_HELPERS(fetch_add)
GEN_ATOMIC_HELPERS(fetch_and)
GEN_ATOMIC_HELPERS(fetch_or)
GEN_ATOMIC_HELPERS(fetch_xor)
GEN_ATOMIC_HELPERS(fetch_smin)
GEN_ATOMIC_HELPERS(fetch_umin)
GEN_ATOMIC_HELPERS(fetch_smax)
GEN_ATOMIC_HELPERS(fetch_umax)
GEN_ATOMIC_HELPERS(add_fetch)
GEN_ATOMIC_HELPERS(and_fetch)
GEN_ATOMIC_HELPERS(or_fetch)
GEN_ATOMIC_HELPERS(xor_fetch)
GEN_ATOMIC_HELPERS(smin_fetch)
GEN_ATOMIC_HELPERS(umin_fetch)
GEN_ATOMIC_HELPERS(smax_fetch)
GEN_ATOMIC_HELPERS(umax_fetch)
GEN_ATOMIC_HELPERS(xchg)
#undef GEN_ATOMIC_HELPERS
DEF_HELPER_FLAGS_3(gvec_mov, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_dup8, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup16, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup32, TCG_CALL_NO_RWG, void, ptr, i32, i32)
DEF_HELPER_FLAGS_3(gvec_dup64, TCG_CALL_NO_RWG, void, ptr, i32, i64)
DEF_HELPER_FLAGS_4(gvec_add8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_add64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_adds8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_adds64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_sub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_subs8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_subs64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_mul8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_mul64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_muls8, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls16, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls32, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_muls64, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ssadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sssub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_usadd64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ussub64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smin64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_smax64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umin64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_umax64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg8, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg16, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg32, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_neg64, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_abs8, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_abs16, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_abs32, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_abs64, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_not, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_and, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_or, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_xor, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_andc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_orc, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_nand, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_nor, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eqv, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ands, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_xors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_4(gvec_ors, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
DEF_HELPER_FLAGS_3(gvec_shl8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shl64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_shr64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar8i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar16i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar32i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_3(gvec_sar64i, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shl8v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shl16v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shl32v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shl64v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shr8v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shr16v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shr32v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_shr64v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sar8v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sar16v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sar32v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sar64v, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_eq64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ne64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_lt64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_le64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ltu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu8, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu16, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_leu64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_bitsel, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
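
Each DEF_HELPER_FLAGS_n line above is expanded differently by each includer; under the prototype-generating definition quoted in tcg-runtime.c earlier (DEF_HELPER_FLAGS_2 producing dh_ctype(ret) HELPER(name)(...)), one line becomes a plain C declaration. A compilable sketch of that single expansion, with dh_ctype_i32 and HELPER simplified stand-ins for what helper-head.h actually provides:

#include <stdint.h>

/* Simplified stand-ins for the helper-head.h machinery. */
#define dh_ctype_i32 uint32_t
#define dh_ctype(t) dh_ctype_##t
#define HELPER(name) helper_##name

/* The prototype-generating definition shown in tcg-runtime.c above. */
#define DEF_HELPER_FLAGS_2(name, flags, ret, t1, t2) \
    dh_ctype(ret) HELPER(name)(dh_ctype(t1), dh_ctype(t2));

/* One line from this header, run through that expansion ... */
DEF_HELPER_FLAGS_2(clz_i32, TCG_CALL_NO_RWG_SE, i32, i32, i32)
/* ... yields the declaration: uint32_t helper_clz_i32(uint32_t, uint32_t); */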

File diff suppressed because it is too large


@@ -0,0 +1,36 @@
/*
* Translated block handling
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TRANSLATE_ALL_H
#define TRANSLATE_ALL_H
#include "exec/exec-all.h"
/* translate-all.c */
struct page_collection *page_collection_lock(struct uc_struct *uc, tb_page_addr_t start,
tb_page_addr_t end);
void page_collection_unlock(struct page_collection *set);
void tb_invalidate_phys_page_fast(struct uc_struct *uc, struct page_collection *pages,
tb_page_addr_t start, int len,
uintptr_t retaddr);
void tb_invalidate_phys_page_range(struct uc_struct *uc, tb_page_addr_t start, tb_page_addr_t end);
void tb_invalidate_phys_range(struct uc_struct *uc, ram_addr_t start, ram_addr_t end);
void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);
#endif /* TRANSLATE_ALL_H */
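
The declarations above pair a lock/unlock bracket around the fast invalidation entry point. A hedged sketch of the expected call shape, not taken from any real call site (the wrapper name and the retaddr of 0 are invented; signatures follow the header):

#include "translate-all.h"   /* the header above */

/* Hypothetical call shape for the page_collection API. */
static void invalidate_range_sketch(struct uc_struct *uc,
                                    tb_page_addr_t start, tb_page_addr_t end)
{
    struct page_collection *pages = page_collection_lock(uc, start, end);
    tb_invalidate_phys_page_fast(uc, pages, start, (int)(end - start), 0);
    page_collection_unlock(pages);
}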

168
qemu/accel/tcg/translator.c Normal file

@@ -0,0 +1,168 @@
/*
* Generic intermediate code generation.
*
* Copyright (C) 2016-2017 Lluís Vilanova <vilanova@ac.upc.edu>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "tcg/tcg.h"
#include "tcg/tcg-op.h"
#include "exec/exec-all.h"
#include "exec/gen-icount.h"
#include "exec/translator.h"
#include <uc_priv.h>
/* Pairs with tcg_clear_temp_count.
To be called by #TranslatorOps.{translate_insn,tb_stop} if
(1) the target is sufficiently clean to support reporting,
(2) as and when all temporaries are known to be consumed.
For most targets, (2) is at the end of translate_insn. */
void translator_loop_temp_check(DisasContextBase *db)
{
#if 0
if (tcg_check_temp_count()) {
qemu_log("warning: TCG temporary leaks before "
TARGET_FMT_lx "\n", db->pc_next);
}
#endif
}
void translator_loop(const TranslatorOps *ops, DisasContextBase *db,
CPUState *cpu, TranslationBlock *tb, int max_insns)
{
int bp_insn = 0;
struct uc_struct *uc = (struct uc_struct *)cpu->uc;
TCGContext *tcg_ctx = uc->tcg_ctx;
TCGOp *prev_op = NULL;
bool block_hook = false;
/* Initialize DisasContext */
db->tb = tb;
db->pc_first = tb->pc;
db->pc_next = db->pc_first;
db->is_jmp = DISAS_NEXT;
db->num_insns = 0;
db->max_insns = max_insns;
db->singlestep_enabled = cpu->singlestep_enabled;
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Reset the temp count so that we can identify leaks */
tcg_clear_temp_count();
/* Unicorn: early check to see if the address of this block is
* the "run until" address. */
if (uc_addr_is_exit(uc, tb->pc)) {
// This should catch that instruction is at the end
// and generate appropriate halting code.
gen_tb_start(tcg_ctx, db->tb);
ops->tb_start(db, cpu);
db->num_insns++;
ops->insn_start(db, cpu);
ops->translate_insn(db, cpu);
goto _end_loop;
}
/* Unicorn: trace this block on request.
* Only hook this block if it was not carried over from a previous
* translation that was cut short by a full translation cache.
*/
if (HOOK_EXISTS_BOUNDED(uc, UC_HOOK_BLOCK, tb->pc)) {
prev_op = tcg_last_op(tcg_ctx);
block_hook = true;
gen_uc_tracecode(tcg_ctx, 0xf8f8f8f8, UC_HOOK_BLOCK_IDX, uc, db->pc_first);
}
// tcg_dump_ops(tcg_ctx, false, "translator loop");
/* Start translating. */
gen_tb_start(tcg_ctx, db->tb);
// tcg_dump_ops(tcg_ctx, false, "tb start");
ops->tb_start(db, cpu);
// tcg_dump_ops(tcg_ctx, false, "tb start 2");
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
while (true) {
db->num_insns++;
ops->insn_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Pass breakpoint hits to target for further processing */
if (!db->singlestep_enabled
&& unlikely(!QTAILQ_EMPTY(&cpu->breakpoints))) {
CPUBreakpoint *bp;
QTAILQ_FOREACH(bp, &cpu->breakpoints, entry) {
if (bp->pc == db->pc_next) {
if (ops->breakpoint_check(db, cpu, bp)) {
bp_insn = 1;
break;
}
}
}
/* The breakpoint_check hook may use DISAS_TOO_MANY to indicate
that only one more instruction is to be executed. Otherwise
it should use DISAS_NORETURN when generating an exception,
but may use a DISAS_TARGET_* value for Something Else. */
if (db->is_jmp > DISAS_TOO_MANY) {
break;
}
}
/* Disassemble one instruction. The translate_insn hook should
update db->pc_next and db->is_jmp to indicate what should be
done next -- either exiting this loop or locating the start of
the next instruction. */
ops->translate_insn(db, cpu);
// tcg_dump_ops(tcg_ctx, false, "insn translate");
/* Stop translation if translate_insn so indicated. */
if (db->is_jmp != DISAS_NEXT) {
break;
}
/* Stop translation if the output buffer is full,
or we have executed all of the allowed instructions. */
if (tcg_op_buf_full(tcg_ctx) || db->num_insns >= db->max_insns) {
db->is_jmp = DISAS_TOO_MANY;
break;
}
}
_end_loop:
/* Emit code to exit the TB, as indicated by db->is_jmp. */
ops->tb_stop(db, cpu);
gen_tb_end(tcg_ctx, db->tb, db->num_insns - bp_insn);
// tcg_dump_ops(tcg_ctx, false, "tb end");
/* The disas_log hook may use these values rather than recompute. */
db->tb->size = db->pc_next - db->pc_first;
db->tb->icount = db->num_insns;
if (block_hook) {
TCGOp *tcg_op;
// Unicorn: patch the callback to have the proper block size.
if (prev_op) {
// As explained further up in the function where prev_op is
// assigned, we move forward in the tail queue, so we're modifying the
// move instruction generated by gen_uc_tracecode() that contains
// the instruction size to assign the proper size (replacing 0xF1F1F1F1).
tcg_op = QTAILQ_NEXT(prev_op, link);
} else {
// this basic block is the first emulated code ever,
// so the basic block operand is the first operand
tcg_op = QTAILQ_FIRST(&tcg_ctx->ops);
}
tcg_op->args[1] = db->tb->size;
}
}
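
Stripped of the Unicorn hooks and breakpoint handling, translator_loop() above drives the TranslatorOps callbacks in a fixed order: init_disas_context, tb_start, then insn_start/translate_insn per instruction, and finally tb_stop. A standalone model of just that ordering, where every name outside the ops fields is invented:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    void (*init_disas_context)(void);
    void (*tb_start)(void);
    void (*insn_start)(void);
    bool (*translate_insn)(void);   /* returns true when the block must end */
    void (*tb_stop)(void);
} OpsSketch;

static void loop_sketch(const OpsSketch *ops, int max_insns)
{
    ops->init_disas_context();
    ops->tb_start();
    for (int i = 0; i < max_insns; i++) {
        ops->insn_start();
        if (ops->translate_insn()) {   /* e.g. a branch ends the block */
            break;
        }
    }
    ops->tb_stop();                    /* always emits the TB epilogue */
}

static void say_init(void)  { puts("init_disas_context"); }
static void say_start(void) { puts("tb_start"); }
static void say_insn(void)  { puts("insn_start"); }
static bool one_insn(void)  { puts("translate_insn"); return true; }
static void say_stop(void)  { puts("tb_stop"); }

int main(void)
{
    OpsSketch ops = { say_init, say_start, say_insn, one_insn, say_stop };
    loop_sketch(&ops, 8);   /* prints the five callbacks in order */
    return 0;
}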

4855
qemu/arm.h

File diff suppressed because it is too large

File diff suppressed because it is too large

2071
qemu/configure vendored

File diff suppressed because it is too large


@@ -1,463 +0,0 @@
/*
* emulator main execution loop
*
* Copyright (c) 2003-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
#include "tcg.h"
#include "sysemu/sysemu.h"
#include "uc_priv.h"
static tcg_target_ulong cpu_tb_exec(CPUState *cpu, uint8_t *tb_ptr);
static TranslationBlock *tb_find_slow(CPUArchState *env, target_ulong pc,
target_ulong cs_base, uint64_t flags);
static TranslationBlock *tb_find_fast(CPUArchState *env);
static void cpu_handle_debug_exception(CPUArchState *env);
void cpu_loop_exit(CPUState *cpu)
{
cpu->current_tb = NULL;
siglongjmp(cpu->jmp_env, 1);
}
/* exit the current TB from a signal handler. The host registers are
restored in a state compatible with the CPU emulator
*/
void cpu_resume_from_signal(CPUState *cpu, void *puc)
{
/* XXX: restore cpu registers saved in host registers */
cpu->exception_index = -1;
siglongjmp(cpu->jmp_env, 1);
}
/* main execution loop */
int cpu_exec(struct uc_struct *uc, CPUArchState *env) // qq
{
CPUState *cpu = ENV_GET_CPU(env);
TCGContext *tcg_ctx = env->uc->tcg_ctx;
CPUClass *cc = CPU_GET_CLASS(uc, cpu);
#ifdef TARGET_I386
X86CPU *x86_cpu = X86_CPU(uc, cpu);
#endif
int ret = 0, interrupt_request;
TranslationBlock *tb;
uint8_t *tc_ptr;
uintptr_t next_tb;
struct hook *hook;
if (cpu->halted) {
if (!cpu_has_work(cpu)) {
return EXCP_HALTED;
}
cpu->halted = 0;
}
uc->current_cpu = cpu;
/* As long as current_cpu is null, up to the assignment just above,
* requests by other threads to exit the execution loop are expected to
* be issued using the exit_request global. We must make sure that our
* evaluation of the global value is performed past the current_cpu
* value transition point, which requires a memory barrier as well as
* an instruction scheduling constraint on modern architectures. */
smp_mb();
if (unlikely(uc->exit_request)) {
cpu->exit_request = 1;
}
cc->cpu_exec_enter(cpu);
cpu->exception_index = -1;
env->invalid_error = UC_ERR_OK;
/* prepare setjmp context for exception handling */
for(;;) {
if (sigsetjmp(cpu->jmp_env, 0) == 0) {
if (uc->stop_request || uc->invalid_error) {
break;
}
/* if an exception is pending, we execute it here */
if (cpu->exception_index >= 0) {
//printf(">>> GOT INTERRUPT. exception idx = %x\n", cpu->exception_index); // qq
if (cpu->exception_index >= EXCP_INTERRUPT) {
/* exit request from the cpu execution loop */
ret = cpu->exception_index;
if (ret == EXCP_DEBUG) {
cpu_handle_debug_exception(env);
}
break;
} else {
bool catched = false;
#if defined(CONFIG_USER_ONLY)
/* if user mode only, we simulate a fake exception
which will be handled outside the cpu execution
loop */
#if defined(TARGET_I386)
cc->do_interrupt(cpu);
#endif
ret = cpu->exception_index;
break;
#else
#if defined(TARGET_X86_64)
if (env->exception_is_int) {
// point EIP to the next instruction after INT
env->eip = env->exception_next_eip;
}
#endif
#if defined(TARGET_MIPS) || defined(TARGET_MIPS64)
env->active_tc.PC = uc->next_pc;
#endif
if (uc->stop_interrupt && uc->stop_interrupt(cpu->exception_index)) {
// Unicorn: call registered invalid instruction callbacks
HOOK_FOREACH_VAR_DECLARE;
HOOK_FOREACH(uc, hook, UC_HOOK_INSN_INVALID) {
if (hook->to_delete)
continue;
catched = ((uc_cb_hookinsn_invalid_t)hook->callback)(uc, hook->user_data);
if (catched)
break;
}
if (!catched)
uc->invalid_error = UC_ERR_INSN_INVALID;
} else {
// Unicorn: call registered interrupt callbacks
HOOK_FOREACH_VAR_DECLARE;
HOOK_FOREACH(uc, hook, UC_HOOK_INTR) {
if (hook->to_delete)
continue;
((uc_cb_hookintr_t)hook->callback)(uc, cpu->exception_index, hook->user_data);
catched = true;
}
if (!catched)
uc->invalid_error = UC_ERR_EXCEPTION;
}
// Unicorn: if the interrupt was not caught by any hook, stop execution.
if (!catched) {
cpu->halted = 1;
ret = EXCP_HLT;
break;
}
cpu->exception_index = -1;
#endif
}
}
next_tb = 0; /* force lookup of first TB */
for(;;) {
interrupt_request = cpu->interrupt_request;
if (unlikely(interrupt_request)) {
if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
/* Mask out external interrupts for this step. */
interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
}
if (interrupt_request & CPU_INTERRUPT_DEBUG) {
cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
cpu->exception_index = EXCP_DEBUG;
cpu_loop_exit(cpu);
}
if (interrupt_request & CPU_INTERRUPT_HALT) {
cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
cpu_loop_exit(cpu);
}
#if defined(TARGET_I386)
if (interrupt_request & CPU_INTERRUPT_INIT) {
cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0);
do_cpu_init(x86_cpu);
cpu->exception_index = EXCP_HALTED;
cpu_loop_exit(cpu);
}
#else
if (interrupt_request & CPU_INTERRUPT_RESET) {
cpu_reset(cpu);
}
#endif
/* The target hook has 3 exit conditions:
False when the interrupt isn't processed,
True when it is, and we should restart on a new TB,
and via longjmp via cpu_loop_exit. */
if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
next_tb = 0;
}
/* Don't use the cached interrupt_request value,
do_interrupt may have updated the EXITTB flag. */
if (cpu->interrupt_request & CPU_INTERRUPT_EXITTB) {
cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
/* ensure that no TB jump will be modified as
the program flow was changed */
next_tb = 0;
}
}
if (unlikely(cpu->exit_request)) {
cpu->exit_request = 0;
cpu->exception_index = EXCP_INTERRUPT;
cpu_loop_exit(cpu);
}
tb = tb_find_fast(env); // qq
if (!tb) { // invalid TB due to invalid code?
uc->invalid_error = UC_ERR_FETCH_UNMAPPED;
ret = EXCP_HLT;
break;
}
/* Note: we do it here to avoid a gcc bug on Mac OS X when
doing it in tb_find_slow */
if (tcg_ctx->tb_ctx.tb_invalidated_flag) {
/* as some TB could have been invalidated because
of memory exceptions while generating the code, we
must recompute the hash index here */
next_tb = 0;
tcg_ctx->tb_ctx.tb_invalidated_flag = 0;
}
/* see if we can patch the calling TB. When the TB
spans two pages, we cannot safely do a direct
jump. */
if (next_tb != 0 && tb->page_addr[1] == -1) {
tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
next_tb & TB_EXIT_MASK, tb);
}
/* cpu_interrupt might be called while translating the
TB, but before it is linked into a potentially
infinite loop and becomes env->current_tb. Avoid
starting execution if there is a pending interrupt. */
cpu->current_tb = tb;
barrier();
if (likely(!cpu->exit_request)) {
tc_ptr = tb->tc_ptr;
/* execute the generated code */
next_tb = cpu_tb_exec(cpu, tc_ptr); // qq
switch (next_tb & TB_EXIT_MASK) {
case TB_EXIT_REQUESTED:
/* Something asked us to stop executing
* chained TBs; just continue round the main
* loop. Whatever requested the exit will also
* have set something else (eg exit_request or
* interrupt_request) which we will handle
* next time around the loop.
*/
tb = (TranslationBlock *)(next_tb & ~TB_EXIT_MASK);
next_tb = 0;
break;
default:
break;
}
}
cpu->current_tb = NULL;
/* reset soft MMU for next block (it can currently
only be set by a memory fault) */
} /* for(;;) */
} else {
/* Reload env after longjmp - the compiler may have smashed all
* local variables as longjmp is marked 'noreturn'. */
cpu = uc->current_cpu;
env = cpu->env_ptr;
cc = CPU_GET_CLASS(uc, cpu);
#ifdef TARGET_I386
x86_cpu = X86_CPU(uc, cpu);
#endif
}
} /* for(;;) */
// Unicorn: Clear any TCG exit flag that might have been left set by exit requests
uc->current_cpu->tcg_exit_req = 0;
cc->cpu_exec_exit(cpu);
// Unicorn: flush the JIT cache because emulation might stop in
// the middle of translation, which would leave incomplete code behind.
// TODO: optimize this for better performance
tb_flush(env);
/* fail safe : never use current_cpu outside cpu_exec() */
// uc->current_cpu = NULL;
return ret;
}
/* Execute a TB, and fix up the CPU state afterwards if necessary */
static tcg_target_ulong cpu_tb_exec(CPUState *cpu, uint8_t *tb_ptr)
{
CPUArchState *env = cpu->env_ptr;
TCGContext *tcg_ctx = env->uc->tcg_ctx;
uintptr_t next_tb;
next_tb = tcg_qemu_tb_exec(env, tb_ptr);
if ((next_tb & TB_EXIT_MASK) > TB_EXIT_IDX1) {
/* We didn't start executing this TB (eg because the instruction
* counter hit zero); we must restore the guest PC to the address
* of the start of the TB.
*/
CPUClass *cc = CPU_GET_CLASS(env->uc, cpu);
TranslationBlock *tb = (TranslationBlock *)(next_tb & ~TB_EXIT_MASK);
/* Both set_pc() & synchronize_from_tb() can be skipped when a code tracing
* hook is installed or timer mode is in effect, since those already fix
* the PC.
*/
if (!HOOK_EXISTS(env->uc, UC_HOOK_CODE) && !env->uc->timeout) {
// We should sync pc for R/W error.
switch (env->invalid_error) {
case UC_ERR_WRITE_PROT:
case UC_ERR_READ_PROT:
case UC_ERR_FETCH_PROT:
case UC_ERR_WRITE_UNMAPPED:
case UC_ERR_READ_UNMAPPED:
case UC_ERR_FETCH_UNMAPPED:
case UC_ERR_WRITE_UNALIGNED:
case UC_ERR_READ_UNALIGNED:
case UC_ERR_FETCH_UNALIGNED:
break;
default:
if (cc->synchronize_from_tb) {
// avoid sync twice when helper_uc_tracecode() already did this.
if (env->uc->emu_counter <= env->uc->emu_count &&
!env->uc->stop_request && !env->uc->quit_request)
cc->synchronize_from_tb(cpu, tb);
} else {
assert(cc->set_pc);
// avoid sync twice when helper_uc_tracecode() already did this.
if (env->uc->emu_counter <= env->uc->emu_count &&
!env->uc->stop_request && !env->uc->quit_request)
cc->set_pc(cpu, tb->pc);
}
}
}
}
if ((next_tb & TB_EXIT_MASK) == TB_EXIT_REQUESTED) {
/* We were asked to stop executing TBs (probably a pending
* interrupt). We've now stopped, so clear the flag.
*/
cpu->tcg_exit_req = 0;
}
return next_tb;
}
static TranslationBlock *tb_find_slow(CPUArchState *env, target_ulong pc,
target_ulong cs_base, uint64_t flags) // qq
{
CPUState *cpu = ENV_GET_CPU(env);
TCGContext *tcg_ctx = env->uc->tcg_ctx;
TranslationBlock *tb, **ptb1;
unsigned int h;
tb_page_addr_t phys_pc, phys_page1;
target_ulong virt_page2;
tcg_ctx->tb_ctx.tb_invalidated_flag = 0;
/* find translated block using physical mappings */
phys_pc = get_page_addr_code(env, pc); // qq
if (phys_pc == -1) { // invalid code?
return NULL;
}
phys_page1 = phys_pc & TARGET_PAGE_MASK;
h = tb_phys_hash_func(phys_pc);
ptb1 = &tcg_ctx->tb_ctx.tb_phys_hash[h];
for(;;) {
tb = *ptb1;
if (!tb)
goto not_found;
if (tb->pc == pc &&
tb->page_addr[0] == phys_page1 &&
tb->cs_base == cs_base &&
tb->flags == flags) {
/* check next page if needed */
if (tb->page_addr[1] != -1) {
tb_page_addr_t phys_page2;
virt_page2 = (pc & TARGET_PAGE_MASK) +
TARGET_PAGE_SIZE;
phys_page2 = get_page_addr_code(env, virt_page2);
if (tb->page_addr[1] == phys_page2)
goto found;
} else {
goto found;
}
}
ptb1 = &tb->phys_hash_next;
}
not_found:
/* if no translated code available, then translate it now */
tb = tb_gen_code(cpu, pc, cs_base, (int)flags, 0); // qq
if (tb == NULL) {
return NULL;
}
found:
/* Move the last found TB to the head of the list */
if (likely(*ptb1)) {
*ptb1 = tb->phys_hash_next;
tb->phys_hash_next = tcg_ctx->tb_ctx.tb_phys_hash[h];
tcg_ctx->tb_ctx.tb_phys_hash[h] = tb;
}
/* we add the TB in the virtual pc hash table */
cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
return tb;
}
static TranslationBlock *tb_find_fast(CPUArchState *env) // qq
{
CPUState *cpu = ENV_GET_CPU(env);
TranslationBlock *tb;
target_ulong cs_base, pc;
int flags;
/* we record a subset of the CPU state. It will
always be the same before a given translated block
is executed. */
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
tb->flags != flags)) {
tb = tb_find_slow(env, pc, cs_base, flags); // qq
}
return tb;
}
static void cpu_handle_debug_exception(CPUArchState *env)
{
CPUState *cpu = ENV_GET_CPU(env);
CPUClass *cc = CPU_GET_CLASS(env->uc, cpu);
CPUWatchpoint *wp;
if (!cpu->watchpoint_hit) {
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
wp->flags &= ~BP_WATCHPOINT_HIT;
}
}
cc->debug_excp_handler(cpu);
}
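
One detail of the removed tb_find_slow() worth spelling out: after a hash-chain hit, the block is promoted to the head of its tb_phys_hash bucket so the hottest blocks are found first on the next lookup. A standalone model of that move-to-front step on a generic singly linked chain (the real code threads TranslationBlocks via phys_hash_next; node and function names here are invented):

#include <assert.h>
#include <stddef.h>

typedef struct Node {
    int key;
    struct Node *next;
} Node;

static Node *find_mru(Node **head, int key)
{
    Node **pprev = head;
    for (Node *n = *head; n != NULL; pprev = &n->next, n = n->next) {
        if (n->key == key) {
            *pprev = n->next;   /* unlink from current position */
            n->next = *head;    /* relink at the head of the chain */
            *head = n;
            return n;
        }
    }
    return NULL;
}

int main(void)
{
    Node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    Node *head = &a;
    assert(find_mru(&head, 3) == &c);
    /* chain is now 3 -> 1 -> 2 */
    assert(head == &c && c.next == &a && b.next == NULL);
    return 0;
}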


@@ -1,426 +0,0 @@
/*
* Common CPU TLB handling
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
#include "config.h"
#include "cpu.h"
#include "exec/exec-all.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"
#include "exec/cpu_ldst.h"
#include "exec/cputlb.h"
#include "exec/memory-internal.h"
#include "exec/ram_addr.h"
#include "tcg/tcg.h"
#include "uc_priv.h"
//#define DEBUG_TLB
//#define DEBUG_TLB_CHECK
static void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr);
static bool tlb_is_dirty_ram(CPUTLBEntry *tlbe);
static bool qemu_ram_addr_from_host_nofail(struct uc_struct *uc, void *ptr, ram_addr_t *addr);
static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr,
target_ulong size);
static void tlb_set_dirty1(CPUTLBEntry *tlb_entry, target_ulong vaddr);
/* statistics */
//int tlb_flush_count;
/* NOTE:
* If flush_global is true (the usual case), flush all tlb entries.
* If flush_global is false, flush (at least) all tlb entries not
* marked global.
*
* Since QEMU doesn't currently implement a global/not-global flag
* for tlb entries, at the moment tlb_flush() will also flush all
* tlb entries in the flush_global == false case. This is OK because
* CPU architectures generally permit an implementation to drop
* entries from the TLB at any time, so flushing more entries than
* required is only an efficiency issue, not a correctness issue.
*/
void tlb_flush(CPUState *cpu, int flush_global)
{
CPUArchState *env = cpu->env_ptr;
#if defined(DEBUG_TLB)
printf("tlb_flush:\n");
#endif
/* must reset current TB so that interrupts cannot modify the
links while we are modifying them */
cpu->current_tb = NULL;
memset(env->tlb_table, -1, sizeof(env->tlb_table));
memset(env->tlb_v_table, -1, sizeof(env->tlb_v_table));
memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
env->vtlb_index = 0;
env->tlb_flush_addr = -1;
env->tlb_flush_mask = 0;
//tlb_flush_count++;
}
void tlb_flush_page(CPUState *cpu, target_ulong addr)
{
CPUArchState *env = cpu->env_ptr;
int i;
int mmu_idx;
#if defined(DEBUG_TLB)
printf("tlb_flush_page: " TARGET_FMT_lx "\n", addr);
#endif
/* Check if we need to flush due to large pages. */
if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
#if defined(DEBUG_TLB)
printf("tlb_flush_page: forced full flush ("
TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
env->tlb_flush_addr, env->tlb_flush_mask);
#endif
tlb_flush(cpu, 1);
return;
}
/* must reset current TB so that interrupts cannot modify the
links while we are modifying them */
cpu->current_tb = NULL;
addr &= TARGET_PAGE_MASK;
i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
}
/* check whether there are entries that need to be flushed in the vtlb */
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
int k;
for (k = 0; k < CPU_VTLB_SIZE; k++) {
tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
}
}
tb_flush_jmp_cache(cpu, addr);
}
/* update the TLBs so that writes to code in the virtual page 'addr'
can be detected */
void tlb_protect_code(struct uc_struct *uc, ram_addr_t ram_addr)
{
cpu_physical_memory_reset_dirty(uc, ram_addr, TARGET_PAGE_SIZE,
DIRTY_MEMORY_CODE);
}
/* update the TLB so that writes in physical page 'phys_addr' are no longer
tested for self modifying code */
void tlb_unprotect_code_phys(CPUState *cpu, ram_addr_t ram_addr,
target_ulong vaddr)
{
cpu_physical_memory_set_dirty_flag(cpu->uc, ram_addr, DIRTY_MEMORY_CODE);
}
void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry, uintptr_t start,
uintptr_t length)
{
uintptr_t addr;
if (tlb_is_dirty_ram(tlb_entry)) {
addr = (tlb_entry->addr_write & TARGET_PAGE_MASK) + tlb_entry->addend;
if ((addr - start) < length) {
tlb_entry->addr_write |= TLB_NOTDIRTY;
}
}
}
void cpu_tlb_reset_dirty_all(struct uc_struct *uc,
ram_addr_t start1, ram_addr_t length)
{
CPUState *cpu = uc->cpu;
CPUArchState *env;
int mmu_idx;
env = cpu->env_ptr;
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
unsigned int i;
for (i = 0; i < CPU_TLB_SIZE; i++) {
tlb_reset_dirty_range(&env->tlb_table[mmu_idx][i],
start1, length);
}
for (i = 0; i < CPU_VTLB_SIZE; i++) {
tlb_reset_dirty_range(&env->tlb_v_table[mmu_idx][i],
start1, length);
}
}
}
/* update the TLB corresponding to virtual page vaddr
so that it is no longer dirty */
void tlb_set_dirty(CPUArchState *env, target_ulong vaddr)
{
int i;
int mmu_idx;
vaddr &= TARGET_PAGE_MASK;
i = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
tlb_set_dirty1(&env->tlb_table[mmu_idx][i], vaddr);
}
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
int k;
for (k = 0; k < CPU_VTLB_SIZE; k++) {
tlb_set_dirty1(&env->tlb_v_table[mmu_idx][k], vaddr);
}
}
}
/* Add a new TLB entry. At most one entry for a given virtual address
is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the
supplied size is only used by tlb_flush_page. */
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, int prot,
int mmu_idx, target_ulong size)
{
CPUArchState *env = cpu->env_ptr;
MemoryRegionSection *section;
unsigned int index;
target_ulong address;
target_ulong code_address;
uintptr_t addend;
CPUTLBEntry *te;
hwaddr iotlb, xlat, sz;
unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
assert(size >= TARGET_PAGE_SIZE);
if (size != TARGET_PAGE_SIZE) {
tlb_add_large_page(env, vaddr, size);
}
sz = size;
section = address_space_translate_for_iotlb(cpu->as, paddr,
&xlat, &sz);
assert(sz >= TARGET_PAGE_SIZE);
#if defined(DEBUG_TLB)
printf("tlb_set_page: vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
" prot=%x idx=%d\n",
vaddr, paddr, prot, mmu_idx);
#endif
address = vaddr;
if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) {
/* IO memory case */
address |= TLB_MMIO;
addend = 0;
} else {
/* TLB_MMIO for rom/romd handled below */
addend = (uintptr_t)((char*)memory_region_get_ram_ptr(section->mr) + xlat);
}
code_address = address;
iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
prot, &address);
index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
te = &env->tlb_table[mmu_idx][index];
/* do not discard the translation in te, evict it into a victim tlb */
env->tlb_v_table[mmu_idx][vidx] = *te;
env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
/* refill the tlb */
env->iotlb[mmu_idx][index] = iotlb - vaddr;
te->addend = (uintptr_t)(addend - vaddr);
if (prot & PAGE_READ) {
te->addr_read = address;
} else {
te->addr_read = -1;
}
if (prot & PAGE_EXEC) {
te->addr_code = code_address;
} else {
te->addr_code = -1;
}
if (prot & PAGE_WRITE) {
if ((memory_region_is_ram(section->mr) && section->readonly)
|| memory_region_is_romd(section->mr)) {
/* Write access calls the I/O callback. */
te->addr_write = address | TLB_MMIO;
} else if (memory_region_is_ram(section->mr)
&& cpu_physical_memory_is_clean(cpu->uc, (ram_addr_t)(section->mr->ram_addr
+ xlat))) {
te->addr_write = address | TLB_NOTDIRTY;
} else {
te->addr_write = address;
}
} else {
te->addr_write = -1;
}
}
/* NOTE: this function can trigger an exception */
/* NOTE2: the returned address is not exactly the physical address: it
* is actually a ram_addr_t (in system mode; the user mode emulation
* version of this function returns a guest virtual address).
*/
tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
{
int mmu_idx, page_index, pd;
void *p;
MemoryRegion *mr;
ram_addr_t ram_addr;
CPUState *cpu = ENV_GET_CPU(env1);
page_index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = cpu_mmu_index(env1);
if ((mmu_idx < 0) || (mmu_idx >= NB_MMU_MODES)) {
return -1;
}
if (unlikely(env1->tlb_table[mmu_idx][page_index].addr_code !=
(addr & TARGET_PAGE_MASK))) {
cpu_ldub_code(env1, addr);
        // check for an NX-related fetch error reported by the softmmu helpers
if (env1->invalid_error == UC_ERR_FETCH_PROT) {
return -1;
}
}
pd = env1->iotlb[mmu_idx][page_index] & ~TARGET_PAGE_MASK;
mr = iotlb_to_region(cpu->as, pd);
if (memory_region_is_unassigned(cpu->uc, mr)) {
CPUClass *cc = CPU_GET_CLASS(env1->uc, cpu);
if (cc->do_unassigned_access) {
cc->do_unassigned_access(cpu, addr, false, true, 0, 4);
} else {
//cpu_abort(cpu, "Trying to execute code outside RAM or ROM at 0x"
// TARGET_FMT_lx "\n", addr); // qq
env1->invalid_addr = addr;
env1->invalid_error = UC_ERR_FETCH_UNMAPPED;
return -1;
}
}
p = (void *)((uintptr_t)addr + env1->tlb_table[mmu_idx][page_index].addend);
if (!qemu_ram_addr_from_host_nofail(cpu->uc, p, &ram_addr)) {
env1->invalid_addr = addr;
env1->invalid_error = UC_ERR_FETCH_UNMAPPED;
return -1;
    } else {
        return ram_addr;
    }
}
static bool qemu_ram_addr_from_host_nofail(struct uc_struct *uc, void *ptr, ram_addr_t *ram_addr)
{
if (qemu_ram_addr_from_host(uc, ptr, ram_addr) == NULL) {
// fprintf(stderr, "Bad ram pointer %p\n", ptr);
return false;
}
return true;
}
static void tlb_set_dirty1(CPUTLBEntry *tlb_entry, target_ulong vaddr)
{
if (tlb_entry->addr_write == (vaddr | TLB_NOTDIRTY)) {
tlb_entry->addr_write = vaddr;
}
}
/* Our TLB does not support large pages, so remember the area covered by
large pages and trigger a full TLB flush if these are invalidated. */
static void tlb_add_large_page(CPUArchState *env, target_ulong vaddr,
target_ulong size)
{
target_ulong mask = ~(size - 1);
if (env->tlb_flush_addr == (target_ulong)-1) {
env->tlb_flush_addr = vaddr & mask;
env->tlb_flush_mask = mask;
return;
}
/* Extend the existing region to include the new page.
This is a compromise between unnecessary flushes and the cost
of maintaining a full variable size TLB. */
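    /*
     * Worked example (illustrative, not from the original source): with
     * 64 KB large pages, a first call for vaddr 0x12340000 records
     * tlb_flush_addr = 0x12340000 and tlb_flush_mask = 0xffff0000. A
     * second call for vaddr 0x12350000 differs in bit 16, so the loop
     * below widens the mask to 0xfffe0000 and the tracked region becomes
     * [0x12340000, 0x12360000). Any tlb_flush_page() that hits this
     * window then degrades to a full flush.
     */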
mask &= env->tlb_flush_mask;
while (((env->tlb_flush_addr ^ vaddr) & mask) != 0) {
mask <<= 1;
}
env->tlb_flush_addr &= mask;
env->tlb_flush_mask = mask;
}
static bool tlb_is_dirty_ram(CPUTLBEntry *tlbe)
{
return (tlbe->addr_write & (TLB_INVALID_MASK|TLB_MMIO|TLB_NOTDIRTY)) == 0;
}
static void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
{
if (addr == (tlb_entry->addr_read &
(TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
addr == (tlb_entry->addr_write &
(TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
addr == (tlb_entry->addr_code &
(TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
memset(tlb_entry, -1, sizeof(*tlb_entry));
}
}
#define MMUSUFFIX _mmu
#define SHIFT 0
#include "softmmu_template.h"
#define SHIFT 1
#include "softmmu_template.h"
#define SHIFT 2
#include "softmmu_template.h"
#define SHIFT 3
#include "softmmu_template.h"
#undef MMUSUFFIX
#define MMUSUFFIX _cmmu
#undef GETPC_ADJ
#define GETPC_ADJ 0
#undef GETRA
#define GETRA() ((uintptr_t)0)
#define SOFTMMU_CODE_ACCESS
#define SHIFT 0
#include "softmmu_template.h"
#define SHIFT 1
#include "softmmu_template.h"
#define SHIFT 2
#include "softmmu_template.h"
#define SHIFT 3
#include "softmmu_template.h"

View File

@ -27,8 +27,8 @@
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
* EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "qemu-common.h"
#include "qemu/aes.h"
#include "qemu/osdep.h"
#include "crypto/aes.h"
typedef uint32_t u32;
typedef uint8_t u8;
@ -1057,3 +1057,596 @@ const uint32_t AES_Td4[256] = {
0xe1e1e1e1U, 0x69696969U, 0x14141414U, 0x63636363U,
0x55555555U, 0x21212121U, 0x0c0c0c0cU, 0x7d7d7d7dU,
};
static const u32 rcon[] = {
0x01000000, 0x02000000, 0x04000000, 0x08000000,
0x10000000, 0x20000000, 0x40000000, 0x80000000,
0x1B000000, 0x36000000, /* for 128-bit blocks, Rijndael never uses more than 10 rcon values */
};
/**
* Expand the cipher key into the encryption key schedule.
*/
int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
AES_KEY *key) {
u32 *rk;
int i = 0;
u32 temp;
    if (!userKey || !key) {
        return -1;
    }
    if (bits != 128 && bits != 192 && bits != 256) {
        return -2;
    }
    rk = key->rd_key;
    if (bits == 128) {
        key->rounds = 10;
    } else if (bits == 192) {
        key->rounds = 12;
    } else {
        key->rounds = 14;
    }
rk[0] = GETU32(userKey );
rk[1] = GETU32(userKey + 4);
rk[2] = GETU32(userKey + 8);
rk[3] = GETU32(userKey + 12);
if (bits == 128) {
while (1) {
temp = rk[3];
rk[4] = rk[0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[5] = rk[1] ^ rk[4];
rk[6] = rk[2] ^ rk[5];
rk[7] = rk[3] ^ rk[6];
if (++i == 10) {
return 0;
}
rk += 4;
}
}
rk[4] = GETU32(userKey + 16);
rk[5] = GETU32(userKey + 20);
if (bits == 192) {
while (1) {
temp = rk[ 5];
rk[ 6] = rk[ 0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[ 7] = rk[ 1] ^ rk[ 6];
rk[ 8] = rk[ 2] ^ rk[ 7];
rk[ 9] = rk[ 3] ^ rk[ 8];
if (++i == 8) {
return 0;
}
rk[10] = rk[ 4] ^ rk[ 9];
rk[11] = rk[ 5] ^ rk[10];
rk += 6;
}
}
rk[6] = GETU32(userKey + 24);
rk[7] = GETU32(userKey + 28);
if (bits == 256) {
while (1) {
temp = rk[ 7];
rk[ 8] = rk[ 0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[ 9] = rk[ 1] ^ rk[ 8];
rk[10] = rk[ 2] ^ rk[ 9];
rk[11] = rk[ 3] ^ rk[10];
if (++i == 7) {
return 0;
}
temp = rk[11];
rk[12] = rk[ 4] ^
(AES_Te4[(temp >> 24) ] & 0xff000000) ^
(AES_Te4[(temp >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp ) & 0xff] & 0x000000ff);
rk[13] = rk[ 5] ^ rk[12];
rk[14] = rk[ 6] ^ rk[13];
rk[15] = rk[ 7] ^ rk[14];
rk += 8;
}
}
abort();
}
/**
* Expand the cipher key into the decryption key schedule.
*/
int AES_set_decrypt_key(const unsigned char *userKey, const int bits,
AES_KEY *key) {
u32 *rk;
int i, j, status;
u32 temp;
/* first, start with an encryption schedule */
status = AES_set_encrypt_key(userKey, bits, key);
    if (status < 0) {
        return status;
    }
rk = key->rd_key;
/* invert the order of the round keys: */
for (i = 0, j = 4*(key->rounds); i < j; i += 4, j -= 4) {
temp = rk[i ]; rk[i ] = rk[j ]; rk[j ] = temp;
temp = rk[i + 1]; rk[i + 1] = rk[j + 1]; rk[j + 1] = temp;
temp = rk[i + 2]; rk[i + 2] = rk[j + 2]; rk[j + 2] = temp;
temp = rk[i + 3]; rk[i + 3] = rk[j + 3]; rk[j + 3] = temp;
}
/* apply the inverse MixColumn transform to all round keys but the first and the last: */
for (i = 1; i < (key->rounds); i++) {
rk += 4;
rk[0] =
AES_Td0[AES_Te4[(rk[0] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[0] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[0] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[0] ) & 0xff] & 0xff];
rk[1] =
AES_Td0[AES_Te4[(rk[1] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[1] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[1] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[1] ) & 0xff] & 0xff];
rk[2] =
AES_Td0[AES_Te4[(rk[2] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[2] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[2] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[2] ) & 0xff] & 0xff];
rk[3] =
AES_Td0[AES_Te4[(rk[3] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[3] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[3] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[3] ) & 0xff] & 0xff];
}
return 0;
}
#ifndef AES_ASM
/*
* Encrypt a single block
* in and out can overlap
*/
void AES_encrypt(const unsigned char *in, unsigned char *out,
const AES_KEY *key) {
const u32 *rk;
u32 s0, s1, s2, s3, t0, t1, t2, t3;
#ifndef FULL_UNROLL
int r;
#endif /* ?FULL_UNROLL */
assert(in && out && key);
rk = key->rd_key;
/*
* map byte array block to cipher state
* and add initial round key:
*/
s0 = GETU32(in ) ^ rk[0];
s1 = GETU32(in + 4) ^ rk[1];
s2 = GETU32(in + 8) ^ rk[2];
s3 = GETU32(in + 12) ^ rk[3];
#ifdef FULL_UNROLL
/* round 1: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[ 4];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[ 5];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[ 6];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[ 7];
/* round 2: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[ 8];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[ 9];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[10];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[11];
/* round 3: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[12];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[13];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[14];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[15];
/* round 4: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[16];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[17];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[18];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[19];
/* round 5: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[20];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[21];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[22];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[23];
/* round 6: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[24];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[25];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[26];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[27];
/* round 7: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[28];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[29];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[30];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[31];
/* round 8: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[32];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[33];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[34];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[35];
/* round 9: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[36];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[37];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[38];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[39];
if (key->rounds > 10) {
/* round 10: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[40];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[41];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[42];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[43];
/* round 11: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[44];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[45];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[46];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[47];
if (key->rounds > 12) {
/* round 12: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[48];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[49];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[50];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[51];
/* round 13: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[52];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[53];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[54];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[55];
}
}
rk += key->rounds << 2;
#else /* !FULL_UNROLL */
/*
* Nr - 1 full rounds:
*/
r = key->rounds >> 1;
for (;;) {
t0 =
AES_Te0[(s0 >> 24) ] ^
AES_Te1[(s1 >> 16) & 0xff] ^
AES_Te2[(s2 >> 8) & 0xff] ^
AES_Te3[(s3 ) & 0xff] ^
rk[4];
t1 =
AES_Te0[(s1 >> 24) ] ^
AES_Te1[(s2 >> 16) & 0xff] ^
AES_Te2[(s3 >> 8) & 0xff] ^
AES_Te3[(s0 ) & 0xff] ^
rk[5];
t2 =
AES_Te0[(s2 >> 24) ] ^
AES_Te1[(s3 >> 16) & 0xff] ^
AES_Te2[(s0 >> 8) & 0xff] ^
AES_Te3[(s1 ) & 0xff] ^
rk[6];
t3 =
AES_Te0[(s3 >> 24) ] ^
AES_Te1[(s0 >> 16) & 0xff] ^
AES_Te2[(s1 >> 8) & 0xff] ^
AES_Te3[(s2 ) & 0xff] ^
rk[7];
rk += 8;
if (--r == 0) {
break;
}
s0 =
AES_Te0[(t0 >> 24) ] ^
AES_Te1[(t1 >> 16) & 0xff] ^
AES_Te2[(t2 >> 8) & 0xff] ^
AES_Te3[(t3 ) & 0xff] ^
rk[0];
s1 =
AES_Te0[(t1 >> 24) ] ^
AES_Te1[(t2 >> 16) & 0xff] ^
AES_Te2[(t3 >> 8) & 0xff] ^
AES_Te3[(t0 ) & 0xff] ^
rk[1];
s2 =
AES_Te0[(t2 >> 24) ] ^
AES_Te1[(t3 >> 16) & 0xff] ^
AES_Te2[(t0 >> 8) & 0xff] ^
AES_Te3[(t1 ) & 0xff] ^
rk[2];
s3 =
AES_Te0[(t3 >> 24) ] ^
AES_Te1[(t0 >> 16) & 0xff] ^
AES_Te2[(t1 >> 8) & 0xff] ^
AES_Te3[(t2 ) & 0xff] ^
rk[3];
}
#endif /* ?FULL_UNROLL */
/*
* apply last round and
* map cipher state to byte array block:
*/
s0 =
(AES_Te4[(t0 >> 24) ] & 0xff000000) ^
(AES_Te4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t3 ) & 0xff] & 0x000000ff) ^
rk[0];
PUTU32(out , s0);
s1 =
(AES_Te4[(t1 >> 24) ] & 0xff000000) ^
(AES_Te4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t0 ) & 0xff] & 0x000000ff) ^
rk[1];
PUTU32(out + 4, s1);
s2 =
(AES_Te4[(t2 >> 24) ] & 0xff000000) ^
(AES_Te4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t1 ) & 0xff] & 0x000000ff) ^
rk[2];
PUTU32(out + 8, s2);
s3 =
(AES_Te4[(t3 >> 24) ] & 0xff000000) ^
(AES_Te4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t2 ) & 0xff] & 0x000000ff) ^
rk[3];
PUTU32(out + 12, s3);
}
/*
* Decrypt a single block
* in and out can overlap
*/
void AES_decrypt(const unsigned char *in, unsigned char *out,
const AES_KEY *key) {
const u32 *rk;
u32 s0, s1, s2, s3, t0, t1, t2, t3;
#ifndef FULL_UNROLL
int r;
#endif /* ?FULL_UNROLL */
assert(in && out && key);
rk = key->rd_key;
/*
* map byte array block to cipher state
* and add initial round key:
*/
s0 = GETU32(in ) ^ rk[0];
s1 = GETU32(in + 4) ^ rk[1];
s2 = GETU32(in + 8) ^ rk[2];
s3 = GETU32(in + 12) ^ rk[3];
#ifdef FULL_UNROLL
/* round 1: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[ 4];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[ 5];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[ 6];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[ 7];
/* round 2: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[ 8];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[ 9];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[10];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[11];
/* round 3: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[12];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[13];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[14];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[15];
/* round 4: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[16];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[17];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[18];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[19];
/* round 5: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[20];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[21];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[22];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[23];
/* round 6: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[24];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[25];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[26];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[27];
/* round 7: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[28];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[29];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[30];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[31];
/* round 8: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[32];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[33];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[34];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[35];
/* round 9: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[36];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[37];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[38];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[39];
if (key->rounds > 10) {
/* round 10: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[40];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[41];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[42];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[43];
/* round 11: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[44];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[45];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[46];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[47];
if (key->rounds > 12) {
/* round 12: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[48];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[49];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[50];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[51];
/* round 13: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[52];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[53];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[54];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[55];
}
}
rk += key->rounds << 2;
#else /* !FULL_UNROLL */
/*
* Nr - 1 full rounds:
*/
r = key->rounds >> 1;
for (;;) {
t0 =
AES_Td0[(s0 >> 24) ] ^
AES_Td1[(s3 >> 16) & 0xff] ^
AES_Td2[(s2 >> 8) & 0xff] ^
AES_Td3[(s1 ) & 0xff] ^
rk[4];
t1 =
AES_Td0[(s1 >> 24) ] ^
AES_Td1[(s0 >> 16) & 0xff] ^
AES_Td2[(s3 >> 8) & 0xff] ^
AES_Td3[(s2 ) & 0xff] ^
rk[5];
t2 =
AES_Td0[(s2 >> 24) ] ^
AES_Td1[(s1 >> 16) & 0xff] ^
AES_Td2[(s0 >> 8) & 0xff] ^
AES_Td3[(s3 ) & 0xff] ^
rk[6];
t3 =
AES_Td0[(s3 >> 24) ] ^
AES_Td1[(s2 >> 16) & 0xff] ^
AES_Td2[(s1 >> 8) & 0xff] ^
AES_Td3[(s0 ) & 0xff] ^
rk[7];
rk += 8;
if (--r == 0) {
break;
}
s0 =
AES_Td0[(t0 >> 24) ] ^
AES_Td1[(t3 >> 16) & 0xff] ^
AES_Td2[(t2 >> 8) & 0xff] ^
AES_Td3[(t1 ) & 0xff] ^
rk[0];
s1 =
AES_Td0[(t1 >> 24) ] ^
AES_Td1[(t0 >> 16) & 0xff] ^
AES_Td2[(t3 >> 8) & 0xff] ^
AES_Td3[(t2 ) & 0xff] ^
rk[1];
s2 =
AES_Td0[(t2 >> 24) ] ^
AES_Td1[(t1 >> 16) & 0xff] ^
AES_Td2[(t0 >> 8) & 0xff] ^
AES_Td3[(t3 ) & 0xff] ^
rk[2];
s3 =
AES_Td0[(t3 >> 24) ] ^
AES_Td1[(t2 >> 16) & 0xff] ^
AES_Td2[(t1 >> 8) & 0xff] ^
AES_Td3[(t0 ) & 0xff] ^
rk[3];
}
#endif /* ?FULL_UNROLL */
/*
* apply last round and
* map cipher state to byte array block:
*/
s0 =
(AES_Td4[(t0 >> 24) ] & 0xff000000) ^
(AES_Td4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t1 ) & 0xff] & 0x000000ff) ^
rk[0];
PUTU32(out , s0);
s1 =
(AES_Td4[(t1 >> 24) ] & 0xff000000) ^
(AES_Td4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t2 ) & 0xff] & 0x000000ff) ^
rk[1];
PUTU32(out + 4, s1);
s2 =
(AES_Td4[(t2 >> 24) ] & 0xff000000) ^
(AES_Td4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t3 ) & 0xff] & 0x000000ff) ^
rk[2];
PUTU32(out + 8, s2);
s3 =
(AES_Td4[(t3 >> 24) ] & 0xff000000) ^
(AES_Td4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t0 ) & 0xff] & 0x000000ff) ^
rk[3];
PUTU32(out + 12, s3);
}
#endif /* AES_ASM */
void AES_cbc_encrypt(const unsigned char *in, unsigned char *out,
const unsigned long length, const AES_KEY *key,
unsigned char *ivec, const int enc)
{
unsigned long n;
unsigned long len = length;
unsigned char tmp[AES_BLOCK_SIZE];
assert(in && out && key && ivec);
    if (enc) {
        while (len >= AES_BLOCK_SIZE) {
            for (n = 0; n < AES_BLOCK_SIZE; ++n) {
                tmp[n] = in[n] ^ ivec[n];
            }
            AES_encrypt(tmp, out, key);
            memcpy(ivec, out, AES_BLOCK_SIZE);
            len -= AES_BLOCK_SIZE;
            in += AES_BLOCK_SIZE;
            out += AES_BLOCK_SIZE;
        }
        if (len) {
            for (n = 0; n < len; ++n) {
                tmp[n] = in[n] ^ ivec[n];
            }
            for (n = len; n < AES_BLOCK_SIZE; ++n) {
                tmp[n] = ivec[n];
            }
            AES_encrypt(tmp, tmp, key);
            memcpy(out, tmp, AES_BLOCK_SIZE);
            memcpy(ivec, tmp, AES_BLOCK_SIZE);
        }
    } else {
        while (len >= AES_BLOCK_SIZE) {
            memcpy(tmp, in, AES_BLOCK_SIZE);
            AES_decrypt(in, out, key);
            for (n = 0; n < AES_BLOCK_SIZE; ++n) {
                out[n] ^= ivec[n];
            }
            memcpy(ivec, tmp, AES_BLOCK_SIZE);
            len -= AES_BLOCK_SIZE;
            in += AES_BLOCK_SIZE;
            out += AES_BLOCK_SIZE;
        }
        if (len) {
            memcpy(tmp, in, AES_BLOCK_SIZE);
            AES_decrypt(tmp, tmp, key);
            for (n = 0; n < len; ++n) {
                out[n] = tmp[n] ^ ivec[n];
            }
            memcpy(ivec, tmp, AES_BLOCK_SIZE);
        }
    }
}
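/*
 * Usage sketch (illustrative only, not part of the original file): CBC
 * encryption with a 128-bit key. Note that AES_cbc_encrypt() updates
 * 'ivec' in place, so decryption needs its own copy of the original IV
 * and a schedule built with AES_set_decrypt_key().
 *
 *     AES_KEY enc;
 *     unsigned char key[16] = { 0 };   // demo key, all zeroes
 *     unsigned char iv[16]  = { 0 };   // demo IV, all zeroes
 *     unsigned char buf[32] = { 0 };   // plaintext, a multiple of 16 bytes
 *     unsigned char out[32];
 *
 *     if (AES_set_encrypt_key(key, 128, &enc) != 0) {
 *         abort();                     // bad key length or NULL arguments
 *     }
 *     AES_cbc_encrypt(buf, out, sizeof(buf), &enc, iv, 1);
 */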

94
qemu/crypto/init.c Normal file
View File

@ -0,0 +1,94 @@
/*
* QEMU Crypto initialization
*
* Copyright (c) 2015 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "qemu/osdep.h"
#include "crypto/init.h"
#include "qapi/error.h"
#include "qemu/thread.h"
#ifdef CONFIG_GNUTLS
#include <gnutls/gnutls.h>
#include <gnutls/crypto.h>
#endif
#ifdef CONFIG_GCRYPT
#include <gcrypt.h>
#endif
#include "crypto/random.h"
/* #define DEBUG_GNUTLS */
/*
* We need to init gcrypt threading if
*
* - gcrypt < 1.6.0
*
*/
#if (defined(CONFIG_GCRYPT) && \
(GCRYPT_VERSION_NUMBER < 0x010600))
#define QCRYPTO_INIT_GCRYPT_THREADS
#else
#undef QCRYPTO_INIT_GCRYPT_THREADS
#endif
#ifdef DEBUG_GNUTLS
static void qcrypto_gnutls_log(int level, const char *str)
{
fprintf(stderr, "%d: %s", level, str);
}
#endif
int qcrypto_init(void)
{
#ifdef QCRYPTO_INIT_GCRYPT_THREADS
gcry_control(GCRYCTL_SET_THREAD_CBS, &qcrypto_gcrypt_thread_impl);
#endif /* QCRYPTO_INIT_GCRYPT_THREADS */
#ifdef CONFIG_GNUTLS
int ret;
ret = gnutls_global_init();
if (ret < 0) {
// error_setg(errp,
// "Unable to initialize GNUTLS library: %s",
// gnutls_strerror(ret));
return -1;
}
#ifdef DEBUG_GNUTLS
gnutls_global_set_log_level(10);
gnutls_global_set_log_function(qcrypto_gnutls_log);
#endif
#endif
#ifdef CONFIG_GCRYPT
if (!gcry_check_version(GCRYPT_VERSION)) {
// error_setg(errp, "Unable to initialize gcrypt");
return -1;
}
gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);
#endif
if (qcrypto_random_init() < 0) {
return -1;
}
return 0;
}
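/*
 * Usage sketch (illustrative): in this tree qcrypto_init() reports failure
 * through its return value rather than through an Error **, so callers
 * simply check:
 *
 *     if (qcrypto_init() < 0) {
 *         // crypto/random subsystem unavailable; bail out
 *     }
 */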

View File

@ -1,3 +0,0 @@
# Default configuration for x86_64-softmmu
CONFIG_APIC=y

View File

@ -1,244 +0,0 @@
The memory API
==============
The memory API models the memory and I/O buses and controllers of a QEMU
machine. It attempts to allow modelling of:
- ordinary RAM
- memory-mapped I/O (MMIO)
- memory controllers that can dynamically reroute physical memory regions
to different destinations
The memory model provides support for
- tracking RAM changes by the guest
- setting up coalesced memory for kvm
- setting up ioeventfd regions for kvm
Memory is modelled as an acyclic graph of MemoryRegion objects. Sinks
(leaves) are RAM and MMIO regions, while other nodes represent
buses, memory controllers, and memory regions that have been rerouted.
In addition to MemoryRegion objects, the memory API provides AddressSpace
objects for every root and possibly for intermediate MemoryRegions too.
These represent memory as seen from the CPU or a device's viewpoint.
Types of regions
----------------
There are four types of memory regions (all represented by a single C type
MemoryRegion):
- RAM: a RAM region is simply a range of host memory that can be made available
to the guest.
- MMIO: a range of guest memory that is implemented by host callbacks;
each read or write causes a callback to be called on the host.
- container: a container simply includes other memory regions, each at
a different offset. Containers are useful for grouping several regions
into one unit. For example, a PCI BAR may be composed of a RAM region
and an MMIO region.
A container's subregions are usually non-overlapping. In some cases it is
useful to have overlapping regions; for example a memory controller that
can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
that does not prevent cards from claiming overlapping BARs.
- alias: a subsection of another region. Aliases allow a region to be
split apart into discontiguous regions. Examples of uses are memory banks
used when the guest address space is smaller than the amount of RAM
addressed, or a memory controller that splits main memory to expose a "PCI
hole". Aliases may point to any type of region, including other aliases,
but an alias may not point back to itself, directly or indirectly.
It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region). This means that the region
will act like a container, except that any addresses within the container's
region which are not claimed by any subregion are handled by the
container itself (i.e. by its MMIO callbacks or RAM backing). However,
it is generally possible to achieve the same effect with a pure container
one of whose subregions is a low-priority "background" region covering
the whole address range; this is often clearer and is preferred.
Subregions cannot be added to an alias region.
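As a concrete sketch (using the upstream QEMU signatures of this era; note
that the Unicorn fork threads a struct uc_struct * through many of these
calls; "mydev", its sizes, the owning device 'dev' and 'mydev_ops' are made
up for the example), a container holding a RAM subregion and an MMIO
subregion could be built as:

    MemoryRegion container, ram, mmio;

    memory_region_init(&container, OBJECT(dev), "mydev-bar", 0x4000);
    memory_region_init_ram(&ram, OBJECT(dev), "mydev-ram", 0x2000,
                           &error_abort);
    memory_region_init_io(&mmio, OBJECT(dev), &mydev_ops, dev,
                          "mydev-mmio", 0x1000);
    memory_region_add_subregion(&container, 0x0000, &ram);
    memory_region_add_subregion(&container, 0x2000, &mmio);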
Region names
------------
Regions are assigned names by the constructor. For most regions these are
only used for debugging purposes, but RAM regions also use the name to identify
live migration sections. This means that RAM region names need to have ABI
stability.
Region lifecycle
----------------
A region is created by one of the constructor functions (memory_region_init*())
and attached to an object. It is then destroyed by object_unparent() or simply
when the parent object dies.
In between, a region can be added to an address space
by using memory_region_add_subregion() and removed using
memory_region_del_subregion(). Destroying the region implicitly
removes the region from the address space.
Region attributes may be changed at any point; they take effect once
the region becomes exposed to the guest.
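A minimal lifecycle sketch (illustrative, upstream signatures; 'sysmem'
stands for the system memory region and 'owner' for the owning object,
both obtained elsewhere):

    MemoryRegion mr;

    memory_region_init_ram(&mr, owner, "demo-ram", 0x1000, &error_abort);
    memory_region_add_subregion(sysmem, 0x10000000, &mr); /* guest-visible */
    /* ... */
    memory_region_del_subregion(sysmem, &mr);             /* unmapped */
    object_unparent(OBJECT(&mr));                         /* destroyed */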
Overlapping regions and priority
--------------------------------
Usually, regions may not overlap each other; a memory address decodes into
exactly one target. In some cases it is useful to allow regions to overlap,
and sometimes to control which of the overlapping regions is visible to the
guest. This is done with memory_region_add_subregion_overlap(), which
allows the region to overlap any other region in the same container, and
specifies a priority that allows the core to decide which of two regions at
the same address are visible (highest wins).
Priority values are signed, and the default value is zero. This means that
you can use memory_region_add_subregion_overlap() both to specify a region
that must sit 'above' any others (with a positive priority) and also a
background region that sits 'below' others (with a negative priority).
If the higher priority region in an overlap is a container or alias, then
the lower priority region will appear in any "holes" that the higher priority
region has left by not mapping subregions to that area of its address range.
(This applies recursively -- if the subregions are themselves containers or
aliases that leave holes then the lower priority region will appear in these
holes too.)
For example, suppose we have a container A of size 0x8000 with two subregions
B and C. B is a container mapped at 0x2000, size 0x4000, priority 2; C is
an MMIO region mapped at 0x0, size 0x6000, priority 1. B currently has two
of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
offset 0x2000. As a diagram:
        0      1000   2000   3000   4000   5000   6000   7000   8000
        |------|------|------|------|------|------|------|------|
  A:    [                                                       ]
  C:    [CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC]
  B:                  [                           ]
  D:                  [DDDDDD]
  E:                                [EEEEEE]
The regions that will be seen within this address range then are:
        [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]
Since B has higher priority than C, its subregions appear in the flat map
even where they overlap with C. In ranges where B has not mapped anything
C's region appears.
If B had provided its own MMIO operations (i.e. it was not a pure container)
then these would be used for any addresses in its range not handled by
D or E, and the result would be:
        [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]
Priority values are local to a container, because the priorities of two
regions are only compared when they are both children of the same container.
This means that the device in charge of the container (typically modelling
a bus or a memory controller) can use them to manage the interaction of
its child regions without any side effects on other parts of the system.
In the example above, the priorities of D and E are unimportant because
they do not overlap each other. It is the relative priority of B and C
that causes D and E to appear on top of C: D and E's priorities are never
compared against the priority of C.
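The arrangement above could be expressed as (sketch; A, B, C, D and E are
the MemoryRegions from the diagram, initialized elsewhere):

    memory_region_add_subregion_overlap(&A, 0x2000, &B, 2);
    memory_region_add_subregion_overlap(&A, 0x0000, &C, 1);
    memory_region_add_subregion(&B, 0x0000, &D);
    memory_region_add_subregion(&B, 0x2000, &E);

D and E use the plain non-overlap call because they never overlap each
other within B.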
Visibility
----------
The memory core uses the following rules to select a memory region when the
guest accesses an address:
- all direct subregions of the root region are matched against the address, in
descending priority order
- if the address lies outside the region offset/size, the subregion is
discarded
- if the subregion is a leaf (RAM or MMIO), the search terminates, returning
this leaf region
- if the subregion is a container, the same algorithm is used within the
subregion (after the address is adjusted by the subregion offset)
- if the subregion is an alias, the search is continued at the alias target
(after the address is adjusted by the subregion offset and alias offset)
- if a recursive search within a container or alias subregion does not
find a match (because of a "hole" in the container's coverage of its
address range), then if this is a container with its own MMIO or RAM
backing the search terminates, returning the container itself. Otherwise
we continue with the next subregion in priority order
- if none of the subregions match the address then the search terminates
with no match found
Example memory map
------------------
system_memory: container@0-2^48-1
 |
 +---- lomem: alias@0-0xdfffffff ---> #ram (0-0xdfffffff)
 |
 +---- himem: alias@0x100000000-0x11fffffff ---> #ram (0xe0000000-0xffffffff)
 |
 +---- vga-window: alias@0xa0000-0xbffff ---> #pci (0xa0000-0xbffff)
 |     (prio 1)
 |
 +---- pci-hole: alias@0xe0000000-0xffffffff ---> #pci (0xe0000000-0xffffffff)

pci (0-2^32-1)
 |
 +--- vga-area: container@0xa0000-0xbffff
 |     |
 |     +--- alias@0x00000-0x7fff ---> #vram (0x010000-0x017fff)
 |     |
 |     +--- alias@0x08000-0xffff ---> #vram (0x020000-0x027fff)
 |
 +---- vram: ram@0xe1000000-0xe1ffffff
 |
 +---- vga-mmio: mmio@0xe2000000-0xe200ffff

ram: ram@0x00000000-0xffffffff
This is a (simplified) PC memory map. The 4GB RAM block is mapped into the
system address space via two aliases: "lomem" is a 1:1 mapping of the first
3.5GB; "himem" maps the last 0.5GB at address 4GB. This leaves 0.5GB for the
so-called PCI hole, which allows a 32-bit PCI bus to exist in a system with
4GB of memory.
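The two RAM aliases could be constructed as follows (sketch, upstream
signatures; the variables correspond to the regions in the map above and
are assumed to be MemoryRegions initialized elsewhere):

    memory_region_init_alias(&lomem, NULL, "lomem", &ram,
                             0, 0xe0000000);
    memory_region_init_alias(&himem, NULL, "himem", &ram,
                             0xe0000000, 0x20000000);
    memory_region_add_subregion(&system_memory, 0, &lomem);
    memory_region_add_subregion(&system_memory, 0x100000000ULL, &himem);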
The memory controller diverts addresses in the range 640K-768K to the PCI
address space. This is modelled using the "vga-window" alias, mapped at a
higher priority so it obscures the RAM at the same addresses. The vga window
can be removed by programming the memory controller; this is modelled by
removing the alias and exposing the RAM underneath.
The pci address space is not a direct child of the system address space, since
we only want parts of it to be visible (we accomplish this using aliases).
It has two subregions: vga-area models the legacy vga window and is occupied
by two 32K memory banks pointing at two sections of the framebuffer.
In addition the vram is mapped as a BAR at address 0xe1000000, and an additional
BAR containing MMIO registers is mapped after it.
Note that if the guest maps a BAR outside the PCI hole, it would not be
visible, as the pci-hole alias clips it to a 0.5GB range.
Attributes
----------
Various region attributes (read-only, dirty logging, coalesced mmio, ioeventfd)
can be changed during the region lifecycle. They take effect once the region
is made visible (which can be immediately, later, or never).
MMIO Operations
---------------
MMIO regions are provided with ->read() and ->write() callbacks; in addition
various constraints can be supplied to control how these callbacks are called:
- .valid.min_access_size, .valid.max_access_size define the access sizes
(in bytes) which the device accepts; accesses outside this range will
have device and bus specific behaviour (ignored, or machine check)
- .valid.aligned specifies that the device only accepts naturally aligned
accesses. Unaligned accesses invoke device and bus specific behaviour.
- .impl.min_access_size, .impl.max_access_size define the access sizes
(in bytes) supported by the *implementation*; other access sizes will be
emulated using the ones available. For example a 4-byte write will be
emulated using four 1-byte writes, if .impl.max_access_size = 1.
- .impl.unaligned specifies that the *implementation* supports unaligned
accesses; if false, unaligned accesses will be emulated by two aligned
accesses.
- .old_mmio can be used to ease porting from code using
cpu_register_io_memory(). It should not be used in new code.

84
qemu/exec-vary.c Normal file
View File

@ -0,0 +1,84 @@
/*
* Variable page size handling
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu-common.h"
#define IN_EXEC_VARY 1
#include "exec/exec-all.h"
#include <uc_priv.h>
bool set_preferred_target_page_bits(struct uc_struct *uc, int bits)
{
/*
* The target page size is the lowest common denominator for all
* the CPUs in the system, so we can only make it smaller, never
* larger. And we can't make it smaller once we've committed to
* a particular size.
*/
#ifdef TARGET_PAGE_BITS_VARY
//assert(bits >= TARGET_PAGE_BITS_MIN);
if (uc->init_target_page == NULL) {
uc->init_target_page = calloc(1, sizeof(TargetPageBits));
} else {
return false;
}
if (bits < TARGET_PAGE_BITS_MIN) {
return false;
}
if (uc->init_target_page->bits == 0 || uc->init_target_page->bits > bits) {
if (uc->init_target_page->decided) {
return false;
}
uc->init_target_page->bits = bits;
}
#endif
return true;
}
void finalize_target_page_bits(struct uc_struct *uc)
{
#ifdef TARGET_PAGE_BITS_VARY
if (uc->init_target_page == NULL) {
uc->init_target_page = calloc(1, sizeof(TargetPageBits));
} else {
return;
}
if (uc->target_bits != 0) {
uc->init_target_page->bits = uc->target_bits;
}
if (uc->init_target_page->bits == 0) {
uc->init_target_page->bits = TARGET_PAGE_BITS_MIN;
}
uc->init_target_page->mask = (target_long)-1 << uc->init_target_page->bits;
uc->init_target_page->decided = true;
/*
* For the benefit of an -flto build, prevent the compiler from
* hoisting a read from target_page before we finish initializing.
*/
barrier();
#endif
}
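/*
 * Call-order note (illustrative, not part of the original file): whichever
 * of set_preferred_target_page_bits() and finalize_target_page_bits() runs
 * first allocates uc->init_target_page and fixes the page size; a later
 * call to the other function returns early. For example:
 *
 *     finalize_target_page_bits(uc);   // locks in uc->target_bits, or
 *                                      // TARGET_PAGE_BITS_MIN if unset
 */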

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -1,4 +0,0 @@
#!/bin/sh
for d in x86_64 arm armeb m68k aarch64 aarch64eb mips mipsel mips64 mips64el sparc sparc64; do
python header_gen.py $d > $d.h
done

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -1,4 +0,0 @@
devices-dirs-$(CONFIG_SOFTMMU) += intc/
devices-dirs-y += core/
common-obj-y += $(devices-dirs-y)
obj-y += $(devices-dirs-y)

View File

@ -1,2 +0,0 @@
obj-y += tosa.o
obj-y += virt.o

View File

@ -1,45 +0,0 @@
/* vim:set shiftwidth=4 ts=4 et: */
/*
* PXA255 Sharp Zaurus SL-6000 PDA platform
*
* Copyright (c) 2008 Dmitry Baryshkov
*
* Code based on spitz platform by Andrzej Zaborowski <balrog@zabor.org>
* This code is licensed under the GNU GPL v2.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include "hw/hw.h"
#include "hw/arm/arm.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
static int tosa_init(struct uc_struct *uc, MachineState *machine)
{
    if (uc->mode & UC_MODE_MCLASS) {
        uc->cpu = (CPUState *)cpu_arm_init(uc, "cortex-m3");
    } else if (uc->mode & UC_MODE_ARM926) {
        uc->cpu = (CPUState *)cpu_arm_init(uc, "arm926");
    } else if (uc->mode & UC_MODE_ARM946) {
        uc->cpu = (CPUState *)cpu_arm_init(uc, "arm946");
    } else if (uc->mode & UC_MODE_ARM1176) {
        uc->cpu = (CPUState *)cpu_arm_init(uc, "arm1176");
    } else {
        uc->cpu = (CPUState *)cpu_arm_init(uc, "cortex-a15");
    }
return 0;
}
void tosa_machine_init(struct uc_struct *uc)
{
static QEMUMachine tosapda_machine = { 0 };
tosapda_machine.name = "tosa",
tosapda_machine.init = tosa_init,
tosapda_machine.is_default = 1,
tosapda_machine.arch = UC_ARCH_ARM,
qemu_register_machine(uc, &tosapda_machine, TYPE_MACHINE, NULL);
}

View File

@ -1,74 +0,0 @@
/*
* ARM mach-virt emulation
*
* Copyright (c) 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2 or later, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
* Emulate a virtual board which works by passing Linux all the information
* it needs about what devices are present via the device tree.
* There are some restrictions about what we can do here:
* + we can only present devices whose Linux drivers will work based
* purely on the device tree with no platform data at all
* + we want to present a very stripped-down minimalist platform,
* both because this reduces the security attack surface from the guest
* and also because it reduces our exposure to being broken when
* the kernel updates its device tree bindings and requires further
* information in a device binding that we aren't providing.
* This is essentially the same approach kvmtool uses.
*/
/* Unicorn Emulator Engine */
/* By Nguyen Anh Quynh, 2015 */
#include "hw/arm/arm.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
static int machvirt_init(struct uc_struct *uc, MachineState *machine)
{
const char *cpu_model = machine->cpu_model;
int n;
if (!cpu_model) {
cpu_model = "cortex-a57"; // ARM64
}
for (n = 0; n < smp_cpus; n++) {
Object *cpuobj;
ObjectClass *oc = cpu_class_by_name(uc, TYPE_ARM_CPU, cpu_model);
if (!oc) {
fprintf(stderr, "Unable to find CPU definition\n");
return -1;
}
cpuobj = object_new(uc, object_class_get_name(oc));
uc->cpu = (CPUState *)cpuobj;
object_property_set_bool(uc, cpuobj, true, "realized", NULL);
}
return 0;
}
void machvirt_machine_init(struct uc_struct *uc)
{
static QEMUMachine machvirt_a15_machine = { 0 };
machvirt_a15_machine.name = "virt",
machvirt_a15_machine.init = machvirt_init,
machvirt_a15_machine.is_default = 1,
machvirt_a15_machine.arch = UC_ARCH_ARM64,
qemu_register_machine(uc, &machvirt_a15_machine, TYPE_MACHINE, NULL);
}

View File

@ -1,3 +0,0 @@
# core qdev-related obj files, also used by *-user:
common-obj-y += qdev.o
common-obj-$(CONFIG_SOFTMMU) += machine.o

154
qemu/hw/core/cpu.c Normal file
View File

@ -0,0 +1,154 @@
/*
* QEMU CPU model
*
* Copyright (c) 2012-2014 SUSE LINUX Products GmbH
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see
* <http://www.gnu.org/licenses/gpl-2.0.html>
*/
#include "uc_priv.h"
#include "qemu/osdep.h"
#include "hw/core/cpu.h"
#include "sysemu/tcg.h"
bool cpu_paging_enabled(const CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
return cc->get_paging_enabled(cpu);
}
static bool cpu_common_get_paging_enabled(const CPUState *cpu)
{
return false;
}
void cpu_get_memory_mapping(CPUState *cpu, MemoryMappingList *list)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
cc->get_memory_mapping(cpu, list);
}
static void cpu_common_get_memory_mapping(CPUState *cpu,
MemoryMappingList *list)
{
// error_setg(errp, "Obtaining memory mappings is unsupported on this CPU.");
}
/* Resetting the IRQ comes from across the code base so we take the
 * BQL here if we need to. cpu_interrupt assumes it is held. */
void cpu_reset_interrupt(CPUState *cpu, int mask)
{
cpu->interrupt_request &= ~mask;
}
void cpu_exit(CPUState *cpu)
{
cpu->exit_request = 1;
cpu->tcg_exit_req = 1;
cpu->icount_decr_ptr->u16.high = -1;
}
static void cpu_common_noop(CPUState *cpu)
{
}
static bool cpu_common_exec_interrupt(CPUState *cpu, int int_req)
{
return false;
}
void cpu_reset(CPUState *cpu)
{
CPUClass *klass = CPU_GET_CLASS(cpu);
if (klass->reset != NULL) {
(*klass->reset)(cpu);
}
}
static void cpu_common_reset(CPUState *dev)
{
CPUState *cpu = CPU(dev);
cpu->interrupt_request = 0;
cpu->halted = 0;
cpu->mem_io_pc = 0;
cpu->icount_extra = 0;
cpu->can_do_io = 1;
cpu->exception_index = -1;
cpu->crash_occurred = false;
cpu->cflags_next_tb = -1;
cpu_tb_jmp_cache_clear(cpu);
cpu->uc->tcg_flush_tlb(cpu->uc);
}
static bool cpu_common_has_work(CPUState *cs)
{
return false;
}
static int64_t cpu_common_get_arch_id(CPUState *cpu)
{
return cpu->cpu_index;
}
void cpu_class_init(struct uc_struct *uc, CPUClass *k)
{
k->get_arch_id = cpu_common_get_arch_id;
k->has_work = cpu_common_has_work;
k->get_paging_enabled = cpu_common_get_paging_enabled;
k->get_memory_mapping = cpu_common_get_memory_mapping;
k->debug_excp_handler = cpu_common_noop;
k->cpu_exec_enter = cpu_common_noop;
k->cpu_exec_exit = cpu_common_noop;
k->cpu_exec_interrupt = cpu_common_exec_interrupt;
/* instead of dc->reset. */
k->reset = cpu_common_reset;
return;
}
void cpu_common_initfn(struct uc_struct *uc, CPUState *cs)
{
CPUState *cpu = CPU(cs);
cpu->cpu_index = UNASSIGNED_CPU_INDEX;
cpu->cluster_index = UNASSIGNED_CLUSTER_INDEX;
/* *-user doesn't have configurable SMP topology */
/* the default value is changed by qemu_init_vcpu() for softmmu */
cpu->nr_cores = 1;
cpu->nr_threads = 1;
QTAILQ_INIT(&cpu->breakpoints);
QTAILQ_INIT(&cpu->watchpoints);
/* cpu_exec_initfn(cpu); */
cpu->num_ases = 1;
cpu->as = &(cpu->uc->address_space_memory);
cpu->memory = cpu->uc->system_memory;
}
void cpu_stop(struct uc_struct *uc)
{
if (uc->cpu) {
uc->cpu->stop = false;
uc->cpu->stopped = true;
cpu_exit(uc->cpu);
}
}

View File

@ -1,47 +0,0 @@
/*
* QEMU Machine
*
* Copyright (C) 2014 Red Hat Inc
*
* Authors:
* Marcel Apfelbaum <marcel.a@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*/
#include "hw/boards.h"
static void machine_initfn(struct uc_struct *uc, Object *obj, void *opaque)
{
}
static void machine_finalize(struct uc_struct *uc, Object *obj, void *opaque)
{
}
static const TypeInfo machine_info = {
TYPE_MACHINE,
TYPE_OBJECT,
sizeof(MachineClass),
sizeof(MachineState),
NULL,
machine_initfn,
NULL,
machine_finalize,
NULL,
NULL,
NULL,
NULL,
true,
};
void machine_register_types(struct uc_struct *uc)
{
type_register_static(uc, &machine_info);
}

View File

@ -1,344 +0,0 @@
/*
* Dynamic device configuration and creation.
*
* Copyright (c) 2009 CodeSourcery
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
/* The theory here is that it should be possible to create a machine without
knowledge of specific devices. Historically board init routines have
passed a bunch of arguments to each device, requiring the board know
exactly which device it is dealing with. This file provides an abstract
API for device configuration and initialization. Devices will generally
inherit from a particular bus (e.g. PCI or I2C) rather than
this API directly. */
#include "hw/qdev.h"
#include "qapi/error.h"
#include "qapi/qmp/qerror.h"
static void bus_add_child(BusState *bus, DeviceState *child)
{
char name[32];
BusChild *kid = g_malloc0(sizeof(*kid));
kid->index = bus->max_index++;
kid->child = child;
object_ref(OBJECT(kid->child));
QTAILQ_INSERT_HEAD(&bus->children, kid, sibling);
/* This transfers ownership of kid->child to the property. */
snprintf(name, sizeof(name), "child[%d]", kid->index);
object_property_add_link(OBJECT(bus), name,
object_get_typename(OBJECT(child)),
(Object **)&kid->child,
NULL, /* read-only property */
0, /* return ownership on prop deletion */
NULL);
}
void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
{
dev->parent_bus = bus;
object_ref(OBJECT(bus));
bus_add_child(bus, dev);
}
/* Create a new device. This only initializes the device state structure
and allows properties to be set. qdev_init should be called to
initialize the actual device emulation. */
DeviceState *qdev_create(BusState *bus, const char *name)
{
DeviceState *dev;
dev = qdev_try_create(bus, name);
if (!dev) {
abort();
}
return dev;
}
DeviceState *qdev_try_create(BusState *bus, const char *type)
{
#if 0
DeviceState *dev;
if (object_class_by_name(NULL, type) == NULL) { // no need to fix. aq
return NULL;
}
dev = DEVICE(object_new(NULL, type)); // no need to fix. aq
if (!dev) {
return NULL;
}
if (!bus) {
bus = sysbus_get_default();
}
qdev_set_parent_bus(dev, bus);
object_unref(OBJECT(dev));
return dev;
#endif
return NULL;
}
/* Initialize a device. Device properties should be set before calling
this function. IRQs and MMIO regions should be connected/mapped after
calling this function.
   On failure, destroy the device and return a negative value.
Return 0 on success. */
int qdev_init(DeviceState *dev)
{
return 0;
}
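/*
 * Usage sketch for the documented upstream flow (illustrative only; in this
 * tree qdev_try_create() is stubbed out and returns NULL, so the sketch
 * does not apply as-is). "mydevice" is a made-up type name:
 *
 *     DeviceState *dev = qdev_create(NULL, "mydevice");
 *     // ... set properties on dev ...
 *     qdev_init(dev);
 */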
BusState *qdev_get_parent_bus(DeviceState *dev)
{
return dev->parent_bus;
}
static void qbus_realize(BusState *bus, DeviceState *parent, const char *name)
{
}
static void bus_unparent(struct uc_struct *uc, Object *obj)
{
BusState *bus = BUS(uc, obj);
BusChild *kid;
while ((kid = QTAILQ_FIRST(&bus->children)) != NULL) {
DeviceState *dev = kid->child;
object_unparent(uc, OBJECT(dev));
}
if (bus->parent) {
QLIST_REMOVE(bus, sibling);
bus->parent->num_child_bus--;
bus->parent = NULL;
}
}
void qbus_create_inplace(void *bus, size_t size, const char *typename,
DeviceState *parent, const char *name)
{
object_initialize(NULL, bus, size, typename); // unused, so no need to fix. aq
qbus_realize(bus, parent, name);
}
BusState *qbus_create(const char *typename, DeviceState *parent, const char *name)
{
BusState *bus;
bus = BUS(NULL, object_new(NULL, typename)); // no need to fix. aq
qbus_realize(bus, parent, name);
return bus;
}
static bool device_get_realized(struct uc_struct *uc, Object *obj, Error **errp)
{
DeviceState *dev = DEVICE(uc, obj);
return dev->realized;
}
static int device_set_realized(struct uc_struct *uc, Object *obj, bool value, Error **errp)
{
DeviceState *dev = DEVICE(uc, obj);
DeviceClass *dc = DEVICE_GET_CLASS(uc, dev);
BusState *bus;
Error *local_err = NULL;
if (dev->hotplugged && !dc->hotpluggable) {
error_set(errp, QERR_DEVICE_NO_HOTPLUG, object_get_typename(obj));
return -1;
}
if (value && !dev->realized) {
#if 0
if (!obj->parent) {
static int unattached_count;
gchar *name = g_strdup_printf("device[%d]", unattached_count++);
object_property_add_child(container_get(qdev_get_machine(),
"/unattached"),
name, obj, &error_abort);
g_free(name);
}
#endif
if (dc->realize) {
if (dc->realize(uc, dev, &local_err)) {
/* propagate local_err to the caller instead of silently dropping it */
goto fail;
}
}
if (local_err != NULL) {
goto post_realize_fail;
}
QLIST_FOREACH(bus, &dev->child_bus, sibling) {
object_property_set_bool(uc, OBJECT(bus), true, "realized",
&local_err);
if (local_err != NULL) {
goto child_realize_fail;
}
}
if (dev->hotplugged) {
device_reset(dev);
}
dev->pending_deleted_event = false;
} else if (!value && dev->realized) {
Error **local_errp = NULL;
QLIST_FOREACH(bus, &dev->child_bus, sibling) {
local_errp = local_err ? NULL : &local_err;
object_property_set_bool(uc, OBJECT(bus), false, "realized",
local_errp);
}
if (dc->unrealize) {
local_errp = local_err ? NULL : &local_err;
dc->unrealize(dev, local_errp);
}
dev->pending_deleted_event = true;
}
if (local_err != NULL) {
goto fail;
}
dev->realized = value;
return 0;
child_realize_fail:
QLIST_FOREACH(bus, &dev->child_bus, sibling) {
object_property_set_bool(uc, OBJECT(bus), false, "realized",
NULL);
}
post_realize_fail:
if (dc->unrealize) {
dc->unrealize(dev, NULL);
}
fail:
error_propagate(errp, local_err);
return -1;
}
static void device_initfn(struct uc_struct *uc, Object *obj, void *opaque)
{
DeviceState *dev = DEVICE(uc, obj);
dev->instance_id_alias = -1;
dev->realized = false;
object_property_add_bool(uc, obj, "realized",
device_get_realized, device_set_realized, NULL);
}
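/* Note: the "realized" property registered above is the switch that board
 * code flips with object_property_set_bool(uc, obj, true, "realized", ...)
 * -- see pc_new_cpu() later in this commit -- which in turn lands in
 * device_set_realized() above. */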
static void device_post_init(struct uc_struct *uc, Object *obj)
{
}
/* Unlink device from bus and free the structure. */
static void device_finalize(struct uc_struct *uc, Object *obj, void *opaque)
{
}
static void device_class_base_init(ObjectClass *class, void *data)
{
}
static void device_class_init(struct uc_struct *uc, ObjectClass *class, void *data)
{
}
void device_reset(DeviceState *dev)
{
}
Object *qdev_get_machine(struct uc_struct *uc)
{
return container_get(uc, object_get_root(uc), "/machine");
}
static const TypeInfo device_type_info = {
TYPE_DEVICE,
TYPE_OBJECT,
sizeof(DeviceClass),
sizeof(DeviceState),
NULL,
device_initfn,
device_post_init,
device_finalize,
NULL,
device_class_init,
device_class_base_init,
NULL,
true,
};
static void qbus_initfn(struct uc_struct *uc, Object *obj, void *opaque)
{
}
static void bus_class_init(struct uc_struct *uc, ObjectClass *class, void *data)
{
class->unparent = bus_unparent;
}
static void qbus_finalize(struct uc_struct *uc, Object *obj, void *opaque)
{
BusState *bus = BUS(uc, obj);
g_free((char *)bus->name);
}
static const TypeInfo bus_info = {
TYPE_BUS,
TYPE_OBJECT,
sizeof(BusClass),
sizeof(BusState),
NULL,
qbus_initfn,
NULL,
qbus_finalize,
NULL,
bus_class_init,
NULL,
NULL,
true,
};
void qdev_register_types(struct uc_struct *uc)
{
type_register_static(uc, &bus_info);
type_register_static(uc, &device_type_info);
}


@ -1 +0,0 @@
obj-y += pc.o pc_piix.o


@ -1,181 +0,0 @@
/*
* QEMU PC System Emulator
*
* Copyright (c) 2003-2004 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
#include "hw/hw.h"
#include "hw/i386/pc.h"
#include "sysemu/sysemu.h"
#include "qapi-visit.h"
/* XXX: add IGNNE support */
void cpu_set_ferr(CPUX86State *s)
{
// qemu_irq_raise(ferr_irq);
}
/* TSC handling */
uint64_t cpu_get_tsc(CPUX86State *env)
{
return cpu_get_ticks();
}
/* SMM support */
static cpu_set_smm_t smm_set;
static void *smm_arg;
void cpu_smm_register(cpu_set_smm_t callback, void *arg)
{
assert(smm_set == NULL);
assert(smm_arg == NULL);
smm_set = callback;
smm_arg = arg;
}
void cpu_smm_update(CPUX86State *env)
{
struct uc_struct *uc = x86_env_get_cpu(env)->parent_obj.uc;
if (smm_set && smm_arg && CPU(x86_env_get_cpu(env)) == uc->cpu) {
smm_set(!!(env->hflags & HF_SMM_MASK), smm_arg);
}
}
/* IRQ handling */
int cpu_get_pic_interrupt(CPUX86State *env)
{
X86CPU *cpu = x86_env_get_cpu(env);
int intno;
intno = apic_get_interrupt(cpu->apic_state);
if (intno >= 0) {
return intno;
}
/* read the irq from the PIC */
if (!apic_accept_pic_intr(cpu->apic_state)) {
return -1;
}
return 0;
}
DeviceState *cpu_get_current_apic(struct uc_struct *uc)
{
if (uc->current_cpu) {
X86CPU *cpu = X86_CPU(uc, uc->current_cpu);
return cpu->apic_state;
} else {
return NULL;
}
}
static X86CPU *pc_new_cpu(struct uc_struct *uc, const char *cpu_model, int64_t apic_id,
Error **errp)
{
X86CPU *cpu;
Error *local_err = NULL;
cpu = cpu_x86_create(uc, cpu_model, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
return NULL;
}
object_property_set_int(uc, OBJECT(cpu), apic_id, "apic-id", &local_err);
object_property_set_bool(uc, OBJECT(cpu), true, "realized", &local_err);
if (local_err) {
error_propagate(errp, local_err);
object_unref(uc, OBJECT(cpu));
cpu = NULL;
}
return cpu;
}
int pc_cpus_init(struct uc_struct *uc, const char *cpu_model)
{
int i;
Error *error = NULL;
/* init CPUs */
if (cpu_model == NULL) {
#ifdef TARGET_X86_64
cpu_model = "qemu64";
#else
cpu_model = "qemu32";
#endif
}
for (i = 0; i < smp_cpus; i++) {
uc->cpu = (CPUState *)pc_new_cpu(uc, cpu_model, x86_cpu_apic_id_from_index(i), &error);
if (error) {
//error_report("%s", error_get_pretty(error));
error_free(error);
return -1;
}
}
return 0;
}
static void pc_machine_initfn(struct uc_struct *uc, Object *obj, void *opaque)
{
}
static void pc_machine_class_init(struct uc_struct *uc, ObjectClass *oc, void *data)
{
}
static const TypeInfo pc_machine_info = {
TYPE_PC_MACHINE,
TYPE_MACHINE,
sizeof(PCMachineClass),
sizeof(PCMachineState),
NULL,
pc_machine_initfn,
NULL,
NULL,
NULL,
pc_machine_class_init,
NULL,
NULL,
true,
NULL,
NULL,
// should this be added somehow?
//.interfaces = (InterfaceInfo[]) { { } },
};
void pc_machine_register_types(struct uc_struct *uc)
{
type_register_static(uc, &pc_machine_info);
}


@ -1,78 +0,0 @@
/*
* QEMU PC System Emulator
*
* Copyright (c) 2003-2004 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
#include "hw/i386/pc.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
#include "uc_priv.h"
/* Make sure that guest addresses aligned at 1Gbyte boundaries get mapped to
* host addresses aligned at 1Gbyte boundaries. This way we can use 1GByte
* pages in the host.
*/
#define GIGABYTE_ALIGN true
/* PC hardware initialisation */
static int pc_init1(struct uc_struct *uc, MachineState *machine)
{
return pc_cpus_init(uc, machine->cpu_model);
}
static int pc_init_pci(struct uc_struct *uc, MachineState *machine)
{
return pc_init1(uc, machine);
}
static QEMUMachine pc_i440fx_machine_v2_2 = {
"pc_piix",
"pc-i440fx-2.2",
pc_init_pci,
NULL,
255,
1,
UC_ARCH_X86, // X86
};
static void pc_generic_machine_class_init(struct uc_struct *uc, ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(uc, oc);
QEMUMachine *qm = data;
mc->family = qm->family;
mc->name = qm->name;
mc->init = qm->init;
mc->reset = qm->reset;
mc->max_cpus = qm->max_cpus;
mc->is_default = qm->is_default;
mc->arch = qm->arch;
}
void pc_machine_init(struct uc_struct *uc);
void pc_machine_init(struct uc_struct *uc)
{
qemu_register_machine(uc, &pc_i440fx_machine_v2_2,
TYPE_PC_MACHINE, pc_generic_machine_class_init);
}


@ -1,5 +1,7 @@
 /*
- * QEMU MIPS address translation support
+ * QEMU PC System Emulator
 *
+ * Copyright (c) 2003-2004 Fabrice Bellard
+ *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
@ -19,21 +21,17 @@
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
-/* Modified for Unicorn Engine by Nguyen Anh Quynh, 2015 */
+/* Modified for Unicorn Engine by Chen Huitao<chenhuitao@hfmrit.com>, 2020 */
 #include "hw/hw.h"
-#include "hw/mips/cpudevs.h"
+#include "qemu/compiler.h"
 #include "sysemu/sysemu.h"
+#include "target/i386/cpu.h"
-uint64_t cpu_mips_kseg0_to_phys(void *opaque, uint64_t addr)
+/* TSC handling */
+uint64_t cpu_get_tsc(CPUX86State *env)
 {
-    return addr & 0x1fffffffll;
+    return cpu_get_ticks();
 }
-uint64_t cpu_mips_phys_to_kseg0(void *opaque, uint64_t addr)
-{
-    return addr | ~0x7fffffffll;
-}
-uint64_t cpu_mips_kvm_um_phys_to_kseg0(void *opaque, uint64_t addr)
-{
-    return addr | 0x40000000ll;
-}


@ -1 +0,0 @@
obj-$(CONFIG_APIC) += apic.o apic_common.o


@ -1,230 +0,0 @@
/*
* APIC support
*
* Copyright (c) 2004-2005 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>
*/
#include "qemu/thread.h"
#include "hw/i386/apic_internal.h"
#include "hw/i386/apic.h"
#include "qemu/host-utils.h"
#include "hw/i386/pc.h"
#include "exec/address-spaces.h"
#define MAX_APIC_WORDS 8
#define SYNC_FROM_VAPIC 0x1
#define SYNC_TO_VAPIC 0x2
#define SYNC_ISR_IRR_TO_VAPIC 0x4
static void apic_update_irq(APICCommonState *s);
/* Find first bit starting from msb */
static int apic_fls_bit(uint32_t value)
{
return 31 - clz32(value);
}
/* return -1 if no bit is set */
static int get_highest_priority_int(uint32_t *tab)
{
int i;
for (i = 7; i >= 0; i--) {
if (tab[i] != 0) {
return i * 32 + apic_fls_bit(tab[i]);
}
}
return -1;
}
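/* Example: with tab[1] = 0x80000000 and every other word clear, this
 * returns 1 * 32 + 31 = 63, the highest pending vector. */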
static void apic_sync_vapic(APICCommonState *s, int sync_type)
{
VAPICState vapic_state;
//size_t length;
//off_t start;
int vector;
if (!s->vapic_paddr) {
return;
}
if (sync_type & SYNC_FROM_VAPIC) {
cpu_physical_memory_read(NULL, s->vapic_paddr, &vapic_state,
sizeof(vapic_state));
s->tpr = vapic_state.tpr;
}
if (sync_type & (SYNC_TO_VAPIC | SYNC_ISR_IRR_TO_VAPIC)) {
//start = offsetof(VAPICState, isr);
//length = offsetof(VAPICState, enabled) - offsetof(VAPICState, isr);
if (sync_type & SYNC_TO_VAPIC) {
vapic_state.tpr = s->tpr;
vapic_state.enabled = 1;
//start = 0;
//length = sizeof(VAPICState);
}
vector = get_highest_priority_int(s->isr);
if (vector < 0) {
vector = 0;
}
vapic_state.isr = vector & 0xf0;
vapic_state.zero = 0;
vector = get_highest_priority_int(s->irr);
if (vector < 0) {
vector = 0;
}
vapic_state.irr = vector & 0xff;
//cpu_physical_memory_write_rom(&address_space_memory,
// s->vapic_paddr + start,
// ((void *)&vapic_state) + start, length);
// FIXME qq
}
}
static void apic_vapic_base_update(APICCommonState *s)
{
apic_sync_vapic(s, SYNC_TO_VAPIC);
}
#define foreach_apic(apic, deliver_bitmask, code) \
{\
int __i, __j;\
for(__i = 0; __i < MAX_APIC_WORDS; __i++) {\
uint32_t __mask = deliver_bitmask[__i];\
if (__mask) {\
for(__j = 0; __j < 32; __j++) {\
if (__mask & (1U << __j)) {\
apic = local_apics[__i * 32 + __j];\
if (apic) {\
code;\
}\
}\
}\
}\
}\
}
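/* A sketch of how the macro above is meant to be used.  The local_apics[]
 * table it indexes is part of the full apic.c and is not shown in this
 * excerpt; the IRR update stands in for a real delivery action. */
static void example_bus_deliver(uint32_t *deliver_bitmask, int vector)
{
    APICCommonState *apic_iter;

    foreach_apic(apic_iter, deliver_bitmask,
                 apic_iter->irr[vector >> 5] |= (1U << (vector & 0x1f)) );
}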
static void apic_set_base(APICCommonState *s, uint64_t val)
{
s->apicbase = (val & 0xfffff000) |
(s->apicbase & (MSR_IA32_APICBASE_BSP | MSR_IA32_APICBASE_ENABLE));
/* if disabled, cannot be enabled again */
if (!(val & MSR_IA32_APICBASE_ENABLE)) {
s->apicbase &= ~MSR_IA32_APICBASE_ENABLE;
cpu_clear_apic_feature(&s->cpu->env);
s->spurious_vec &= ~APIC_SV_ENABLE;
}
}
static void apic_set_tpr(APICCommonState *s, uint8_t val)
{
/* Updates from cr8 are ignored while the VAPIC is active */
if (!s->vapic_paddr) {
s->tpr = val << 4;
apic_update_irq(s);
}
}
static uint8_t apic_get_tpr(APICCommonState *s)
{
apic_sync_vapic(s, SYNC_FROM_VAPIC);
return s->tpr >> 4;
}
/* signal the CPU if an irq is pending */
static void apic_update_irq(APICCommonState *s)
{
}
void apic_poll_irq(DeviceState *dev)
{
}
void apic_sipi(DeviceState *dev)
{
}
int apic_get_interrupt(DeviceState *dev)
{
return 0;
}
int apic_accept_pic_intr(DeviceState *dev)
{
return 0;
}
static void apic_pre_save(APICCommonState *s)
{
apic_sync_vapic(s, SYNC_FROM_VAPIC);
}
static void apic_post_load(APICCommonState *s)
{
#if 0
if (s->timer_expiry != -1) {
timer_mod(s->timer, s->timer_expiry);
} else {
timer_del(s->timer);
}
#endif
}
static int apic_realize(struct uc_struct *uc, DeviceState *dev, Error **errp)
{
return 0;
}
static void apic_class_init(struct uc_struct *uc, ObjectClass *klass, void *data)
{
APICCommonClass *k = APIC_COMMON_CLASS(uc, klass);
k->realize = apic_realize;
k->set_base = apic_set_base;
k->set_tpr = apic_set_tpr;
k->get_tpr = apic_get_tpr;
k->vapic_base_update = apic_vapic_base_update;
k->pre_save = apic_pre_save;
k->post_load = apic_post_load;
//printf("... init apic class\n");
}
static const TypeInfo apic_info = {
"apic",
TYPE_APIC_COMMON,
0,
sizeof(APICCommonState),
NULL,
NULL,
NULL,
NULL,
NULL,
apic_class_init,
};
void apic_register_types(struct uc_struct *uc)
{
//printf("... register apic types\n");
type_register_static(uc, &apic_info);
}


@ -1,274 +0,0 @@
/*
* APIC support - common bits of emulated and KVM kernel model
*
* Copyright (c) 2004-2005 Fabrice Bellard
* Copyright (c) 2011 Jan Kiszka, Siemens AG
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>
*/
#include "hw/i386/apic.h"
#include "hw/i386/apic_internal.h"
#include "hw/qdev.h"
#include "uc_priv.h"
void cpu_set_apic_base(struct uc_struct *uc, DeviceState *dev, uint64_t val)
{
if (dev) {
APICCommonState *s = APIC_COMMON(uc, dev);
APICCommonClass *info = APIC_COMMON_GET_CLASS(uc, s);
info->set_base(s, val);
}
}
uint64_t cpu_get_apic_base(struct uc_struct *uc, DeviceState *dev)
{
if (dev) {
APICCommonState *s = APIC_COMMON(uc, dev);
return s->apicbase;
} else {
return MSR_IA32_APICBASE_BSP;
}
}
void cpu_set_apic_tpr(struct uc_struct *uc, DeviceState *dev, uint8_t val)
{
APICCommonState *s;
APICCommonClass *info;
if (!dev) {
return;
}
s = APIC_COMMON(uc, dev);
info = APIC_COMMON_GET_CLASS(uc, s);
info->set_tpr(s, val);
}
uint8_t cpu_get_apic_tpr(struct uc_struct *uc, DeviceState *dev)
{
APICCommonState *s;
APICCommonClass *info;
if (!dev) {
return 0;
}
s = APIC_COMMON(uc, dev);
info = APIC_COMMON_GET_CLASS(uc, s);
return info->get_tpr(s);
}
void apic_enable_vapic(struct uc_struct *uc, DeviceState *dev, hwaddr paddr)
{
APICCommonState *s = APIC_COMMON(uc, dev);
APICCommonClass *info = APIC_COMMON_GET_CLASS(uc, s);
s->vapic_paddr = paddr;
info->vapic_base_update(s);
}
void apic_handle_tpr_access_report(DeviceState *dev, target_ulong ip,
TPRAccess access)
{
//APICCommonState *s = APIC_COMMON(NULL, dev);
//vapic_report_tpr_access(s->vapic, CPU(s->cpu), ip, access);
}
bool apic_next_timer(APICCommonState *s, int64_t current_time)
{
int64_t d;
/* We need to store the timer state separately to support APIC
* implementations that maintain a non-QEMU timer, e.g. inside the
* host kernel. This open-coded state allows us to migrate between
* both models. */
s->timer_expiry = -1;
if (s->lvt[APIC_LVT_TIMER] & APIC_LVT_MASKED) {
return false;
}
d = (current_time - s->initial_count_load_time) >> s->count_shift;
if (s->lvt[APIC_LVT_TIMER] & APIC_LVT_TIMER_PERIODIC) {
if (!s->initial_count) {
return false;
}
d = ((d / ((uint64_t)s->initial_count + 1)) + 1) *
((uint64_t)s->initial_count + 1);
} else {
if (d >= s->initial_count) {
return false;
}
d = (uint64_t)s->initial_count + 1;
}
s->next_time = s->initial_count_load_time + (d << s->count_shift);
s->timer_expiry = s->next_time;
return true;
}
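/* Worked example: with count_shift = 0, initial_count = 99 and a periodic
 * LVT timer, an elapsed d = 250 ticks rounds up to the next multiple of
 * initial_count + 1: ((250 / 100) + 1) * 100 = 300, so next_time falls three
 * full periods after the initial load.  In one-shot mode the timer either
 * already fired (d >= initial_count, return false) or expires once at
 * load time + initial_count + 1 ticks. */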
void apic_init_reset(struct uc_struct *uc, DeviceState *dev)
{
APICCommonState *s = APIC_COMMON(uc, dev);
APICCommonClass *info;
int i;
if (!s) {
return;
}
info = APIC_COMMON_GET_CLASS(uc, s);
s->tpr = 0;
s->spurious_vec = 0xff;
s->log_dest = 0;
s->dest_mode = 0xf;
memset(s->isr, 0, sizeof(s->isr));
memset(s->tmr, 0, sizeof(s->tmr));
memset(s->irr, 0, sizeof(s->irr));
for (i = 0; i < APIC_LVT_NB; i++) {
s->lvt[i] = APIC_LVT_MASKED;
}
s->esr = 0;
memset(s->icr, 0, sizeof(s->icr));
s->divide_conf = 0;
s->count_shift = 0;
s->initial_count = 0;
s->initial_count_load_time = 0;
s->next_time = 0;
s->wait_for_sipi = !cpu_is_bsp(s->cpu);
if (s->timer) {
// timer_del(s->timer);
}
s->timer_expiry = -1;
if (info->reset) {
info->reset(s);
}
}
void apic_designate_bsp(struct uc_struct *uc, DeviceState *dev)
{
APICCommonState *s;
if (dev == NULL) {
return;
}
s = APIC_COMMON(uc, dev);
s->apicbase |= MSR_IA32_APICBASE_BSP;
}
static void apic_reset_common(struct uc_struct *uc, DeviceState *dev)
{
APICCommonState *s = APIC_COMMON(uc, dev);
APICCommonClass *info = APIC_COMMON_GET_CLASS(uc, s);
bool bsp;
bsp = cpu_is_bsp(s->cpu);
s->apicbase = APIC_DEFAULT_ADDRESS |
(bsp ? MSR_IA32_APICBASE_BSP : 0) | MSR_IA32_APICBASE_ENABLE;
s->vapic_paddr = 0;
info->vapic_base_update(s);
apic_init_reset(uc, dev);
if (bsp) {
/*
* LINT0 delivery mode on CPU #0 is set to ExtInt at initialization
* time typically by BIOS, so PIC interrupt can be delivered to the
* processor when local APIC is enabled.
*/
s->lvt[APIC_LVT_LINT0] = 0x700;
}
}
static int apic_common_realize(struct uc_struct *uc, DeviceState *dev, Error **errp)
{
APICCommonState *s = APIC_COMMON(uc, dev);
APICCommonClass *info;
if (uc->apic_no >= MAX_APICS) {
error_setg(errp, "%s initialization failed.",
object_get_typename(OBJECT(dev)));
return -1;
}
s->idx = uc->apic_no++;
info = APIC_COMMON_GET_CLASS(uc, s);
info->realize(uc, dev, errp);
if (!uc->mmio_registered) {
ICCBus *b = ICC_BUS(uc, qdev_get_parent_bus(dev));
memory_region_add_subregion(b->apic_address_space, 0, &s->io_memory);
uc->mmio_registered = true;
}
/* Note: We need at least 1M to map the VAPIC option ROM */
if (!uc->vapic && s->vapic_control & VAPIC_ENABLE_MASK) {
// ram_size >= 1024 * 1024) { // FIXME
uc->vapic = NULL;
}
s->vapic = uc->vapic;
if (uc->apic_report_tpr_access && info->enable_tpr_reporting) {
info->enable_tpr_reporting(s, true);
}
return 0;
}
static void apic_common_class_init(struct uc_struct *uc, ObjectClass *klass, void *data)
{
ICCDeviceClass *idc = ICC_DEVICE_CLASS(uc, klass);
DeviceClass *dc = DEVICE_CLASS(uc, klass);
dc->reset = apic_reset_common;
idc->realize = apic_common_realize;
/*
* Reason: APIC and CPU need to be wired up by
* x86_cpu_apic_create()
*/
dc->cannot_instantiate_with_device_add_yet = true;
//printf("... init apic common class\n");
}
static const TypeInfo apic_common_type = {
TYPE_APIC_COMMON,
TYPE_DEVICE,
sizeof(APICCommonClass),
sizeof(APICCommonState),
NULL,
NULL,
NULL,
NULL,
NULL,
apic_common_class_init,
NULL,
NULL,
true,
};
void apic_common_register_types(struct uc_struct *uc)
{
//printf("... register apic common\n");
type_register_static(uc, &apic_common_type);
}


@ -1 +0,0 @@
obj-y += dummy_m68k.o


@ -1,50 +0,0 @@
/*
* Dummy board with just RAM and CPU for use as an ISS.
*
* Copyright (c) 2007 CodeSourcery.
*
* This code is licensed under the GPL
*/
/* Unicorn Emulator Engine */
/* By Nguyen Anh Quynh, 2015 */
#include "hw/hw.h"
#include "hw/m68k/m68k.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
/* Board init. */
static int dummy_m68k_init(struct uc_struct *uc, MachineState *machine)
{
const char *cpu_model = machine->cpu_model;
CPUM68KState *env;
if (!cpu_model)
cpu_model = "cfv4e";
env = cpu_init(uc, cpu_model);
if (!env) {
fprintf(stderr, "Unable to find m68k CPU definition\n");
return -1;
}
/* Initialize CPU registers. */
env->vbr = 0;
env->pc = 0;
return 0;
}
void dummy_m68k_machine_init(struct uc_struct *uc)
{
static QEMUMachine dummy_m68k_machine = { 0 };
dummy_m68k_machine.name = "dummy";
dummy_m68k_machine.init = dummy_m68k_init;
dummy_m68k_machine.is_default = 1;
dummy_m68k_machine.arch = UC_ARCH_M68K;
//printf(">>> dummy_m68k_machine_init\n");
qemu_register_machine(uc, &dummy_m68k_machine, TYPE_MACHINE, NULL);
}


@ -1,2 +0,0 @@
obj-y += mips_r4k.o
obj-y += addr.o cputimer.o


@ -1,57 +0,0 @@
/*
* QEMU/MIPS pseudo-board
*
* emulates a simple machine with an ISA-like bus.
* ISA I/O space is mapped at 0x14000000 (PHYS) and
* ISA memory at 0x10000000 (PHYS, 16 MB in size).
* All peripheral devices are attached to this "bus" with
* the standard PC ISA addresses.
*/
/* Unicorn Emulator Engine */
/* By Nguyen Anh Quynh, 2015 */
#include "hw/hw.h"
#include "hw/mips/mips.h"
#include "hw/mips/cpudevs.h"
#include "sysemu/sysemu.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
static int mips_r4k_init(struct uc_struct *uc, MachineState *machine)
{
const char *cpu_model = machine->cpu_model;
/* init CPUs */
if (cpu_model == NULL) {
#ifdef TARGET_MIPS64
cpu_model = "R4000";
#else
cpu_model = "24Kf";
#endif
}
uc->cpu = (void*) cpu_mips_init(uc, cpu_model);
if (uc->cpu == NULL) {
fprintf(stderr, "Unable to find CPU definition\n");
return -1;
}
return 0;
}
void mips_machine_init(struct uc_struct *uc)
{
static QEMUMachine mips_machine = {
NULL,
"mips",
mips_r4k_init,
NULL,
0,
1,
UC_ARCH_MIPS,
};
qemu_register_machine(uc, &mips_machine, TYPE_MACHINE, NULL);
}

qemu/hw/ppc/ppc.c (new file, 1569 lines): diff suppressed because it is too large.

qemu/hw/ppc/ppc_booke.c (new file, 373 lines):

@ -0,0 +1,373 @@
/*
* QEMU PowerPC Booke hardware System Emulator
*
* Copyright (c) 2011 AdaCore
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "hw/ppc/ppc.h"
#include "qemu/timer.h"
#include "qemu/log.h"
// #include "kvm_ppc.h"
/* Timer Control Register */
#define TCR_WP_SHIFT 30 /* Watchdog Timer Period */
#define TCR_WP_MASK (0x3U << TCR_WP_SHIFT)
#define TCR_WRC_SHIFT 28 /* Watchdog Timer Reset Control */
#define TCR_WRC_MASK (0x3U << TCR_WRC_SHIFT)
#define TCR_WIE (1U << 27) /* Watchdog Timer Interrupt Enable */
#define TCR_DIE (1U << 26) /* Decrementer Interrupt Enable */
#define TCR_FP_SHIFT 24 /* Fixed-Interval Timer Period */
#define TCR_FP_MASK (0x3U << TCR_FP_SHIFT)
#define TCR_FIE (1U << 23) /* Fixed-Interval Timer Interrupt Enable */
#define TCR_ARE (1U << 22) /* Auto-Reload Enable */
/* Timer Control Register (e500 specific fields) */
#define TCR_E500_FPEXT_SHIFT 13 /* Fixed-Interval Timer Period Extension */
#define TCR_E500_FPEXT_MASK (0xf << TCR_E500_FPEXT_SHIFT)
#define TCR_E500_WPEXT_SHIFT 17 /* Watchdog Timer Period Extension */
#define TCR_E500_WPEXT_MASK (0xf << TCR_E500_WPEXT_SHIFT)
/* Timer Status Register */
#define TSR_FIS (1U << 26) /* Fixed-Interval Timer Interrupt Status */
#define TSR_DIS (1U << 27) /* Decrementer Interrupt Status */
#define TSR_WRS_SHIFT 28 /* Watchdog Timer Reset Status */
#define TSR_WRS_MASK (0x3U << TSR_WRS_SHIFT)
#define TSR_WIS (1U << 30) /* Watchdog Timer Interrupt Status */
#define TSR_ENW (1U << 31) /* Enable Next Watchdog Timer */
typedef struct booke_timer_t booke_timer_t;
struct booke_timer_t {
uint64_t fit_next;
QEMUTimer *fit_timer;
uint64_t wdt_next;
QEMUTimer *wdt_timer;
uint32_t flags;
};
static void booke_update_irq(PowerPCCPU *cpu)
{
CPUPPCState *env = &cpu->env;
ppc_set_irq(cpu, PPC_INTERRUPT_DECR,
(env->spr[SPR_BOOKE_TSR] & TSR_DIS
&& env->spr[SPR_BOOKE_TCR] & TCR_DIE));
ppc_set_irq(cpu, PPC_INTERRUPT_WDT,
(env->spr[SPR_BOOKE_TSR] & TSR_WIS
&& env->spr[SPR_BOOKE_TCR] & TCR_WIE));
ppc_set_irq(cpu, PPC_INTERRUPT_FIT,
(env->spr[SPR_BOOKE_TSR] & TSR_FIS
&& env->spr[SPR_BOOKE_TCR] & TCR_FIE));
}
/* Return the position of the time-base bit at which the FIT will raise an
interrupt */
static uint8_t booke_get_fit_target(CPUPPCState *env, ppc_tb_t *tb_env)
{
uint8_t fp = (env->spr[SPR_BOOKE_TCR] & TCR_FP_MASK) >> TCR_FP_SHIFT;
if (tb_env->flags & PPC_TIMER_E500) {
/* e500 Fixed-interval timer period extension */
uint32_t fpext = (env->spr[SPR_BOOKE_TCR] & TCR_E500_FPEXT_MASK)
>> TCR_E500_FPEXT_SHIFT;
fp = 63 - (fp | fpext << 2);
} else {
fp = env->fit_period[fp];
}
return fp;
}
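/* Worked example: on e500 with fp = 2 and fpext = 1 this selects bit
 * 63 - (2 | (1 << 2)) = 57 of the time base; larger combined values pick
 * lower-numbered bits, i.e. shorter fixed-interval periods. */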
/* Return the position of the time-base bit at which the WDT will raise an
interrupt */
static uint8_t booke_get_wdt_target(CPUPPCState *env, ppc_tb_t *tb_env)
{
uint8_t wp = (env->spr[SPR_BOOKE_TCR] & TCR_WP_MASK) >> TCR_WP_SHIFT;
if (tb_env->flags & PPC_TIMER_E500) {
/* e500 Watchdog timer period extension */
uint32_t wpext = (env->spr[SPR_BOOKE_TCR] & TCR_E500_WPEXT_MASK)
>> TCR_E500_WPEXT_SHIFT;
wp = 63 - (wp | wpext << 2);
} else {
wp = env->wdt_period[wp];
}
return wp;
}
static void booke_update_fixed_timer(CPUPPCState *env,
uint8_t target_bit,
uint64_t *next,
QEMUTimer *timer,
int tsr_bit)
{
#if 0
ppc_tb_t *tb_env = env->tb_env;
uint64_t delta_tick, ticks = 0;
uint64_t tb;
uint64_t period;
uint64_t now;
if (!(env->spr[SPR_BOOKE_TSR] & tsr_bit)) {
/*
* Don't arm the timer again when the guest has the current
* interrupt still pending. Wait for it to ack it.
*/
return;
}
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
tb = cpu_ppc_get_tb(tb_env, now, tb_env->tb_offset);
period = 1ULL << target_bit;
delta_tick = period - (tb & (period - 1));
/* the timer triggers only when the selected bit toggles from 0 to 1 */
if (tb & period) {
ticks = period;
}
if (ticks + delta_tick < ticks) {
/* Overflow, so assume the biggest number we can express. */
ticks = UINT64_MAX;
} else {
ticks += delta_tick;
}
*next = now + muldiv64(ticks, NANOSECONDS_PER_SECOND, tb_env->tb_freq);
if ((*next < now) || (*next > INT64_MAX)) {
/* Overflow, so assume the biggest number the qemu timer supports. */
*next = INT64_MAX;
}
/* XXX: If the expiry time is now, we can't run the callback because we
* don't have access to it. So we just set the timer one nanosecond later.
*/
if (*next == now) {
(*next)++;
} else {
/*
* There's no point in faking any granularity finer than milliseconds;
* anything beyond that just overloads the system.
*/
*next = MAX(*next, now + SCALE_MS);
}
/* Fire the next timer */
timer_mod(timer, *next);
#endif
}
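/* Worked example of the (disabled) computation above: for target_bit = 3
 * the period is 8 ticks.  With tb = 13 (0b1101), delta_tick = 8 - (13 & 7)
 * = 3 ticks until bit 3 clears at tb = 16; since bit 3 is currently set, a
 * full extra period is added, so the next 0 -> 1 toggle of bit 3 is 11
 * ticks away, at tb = 24 (0b11000). */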
static void booke_decr_cb(void *opaque)
{
PowerPCCPU *cpu = opaque;
CPUPPCState *env = &cpu->env;
env->spr[SPR_BOOKE_TSR] |= TSR_DIS;
booke_update_irq(cpu);
if (env->spr[SPR_BOOKE_TCR] & TCR_ARE) {
/* Do not reload 0, it is already there. It would just trigger
* the timer again and lead to an infinite loop */
if (env->spr[SPR_BOOKE_DECAR] != 0) {
/* Auto Reload */
cpu_ppc_store_decr(env, env->spr[SPR_BOOKE_DECAR]);
}
}
}
static void booke_fit_cb(void *opaque)
{
PowerPCCPU *cpu = opaque;
CPUPPCState *env = &cpu->env;
ppc_tb_t *tb_env;
booke_timer_t *booke_timer;
tb_env = env->tb_env;
booke_timer = tb_env->opaque;
env->spr[SPR_BOOKE_TSR] |= TSR_FIS;
booke_update_irq(cpu);
booke_update_fixed_timer(env,
booke_get_fit_target(env, tb_env),
&booke_timer->fit_next,
booke_timer->fit_timer,
TSR_FIS);
}
static void booke_wdt_cb(void *opaque)
{
PowerPCCPU *cpu = opaque;
CPUPPCState *env = &cpu->env;
ppc_tb_t *tb_env;
booke_timer_t *booke_timer;
tb_env = env->tb_env;
booke_timer = tb_env->opaque;
/* TODO: There's lots of complicated stuff to do here */
booke_update_irq(cpu);
booke_update_fixed_timer(env,
booke_get_wdt_target(env, tb_env),
&booke_timer->wdt_next,
booke_timer->wdt_timer,
TSR_WIS);
}
void store_booke_tsr(CPUPPCState *env, target_ulong val)
{
PowerPCCPU *cpu = env_archcpu(env);
ppc_tb_t *tb_env = env->tb_env;
booke_timer_t *booke_timer = tb_env->opaque;
env->spr[SPR_BOOKE_TSR] &= ~val;
// kvmppc_clear_tsr_bits(cpu, val);
if (val & TSR_FIS) {
booke_update_fixed_timer(env,
booke_get_fit_target(env, tb_env),
&booke_timer->fit_next,
booke_timer->fit_timer,
TSR_FIS);
}
if (val & TSR_WIS) {
booke_update_fixed_timer(env,
booke_get_wdt_target(env, tb_env),
&booke_timer->wdt_next,
booke_timer->wdt_timer,
TSR_WIS);
}
booke_update_irq(cpu);
}
void store_booke_tcr(CPUPPCState *env, target_ulong val)
{
PowerPCCPU *cpu = env_archcpu(env);
ppc_tb_t *tb_env = env->tb_env;
booke_timer_t *booke_timer = tb_env->opaque;
env->spr[SPR_BOOKE_TCR] = val;
// kvmppc_set_tcr(cpu);
booke_update_irq(cpu);
booke_update_fixed_timer(env,
booke_get_fit_target(env, tb_env),
&booke_timer->fit_next,
booke_timer->fit_timer,
TSR_FIS);
booke_update_fixed_timer(env,
booke_get_wdt_target(env, tb_env),
&booke_timer->wdt_next,
booke_timer->wdt_timer,
TSR_WIS);
}
#if 0
static void ppc_booke_timer_reset_handle(void *opaque)
{
PowerPCCPU *cpu = opaque;
CPUPPCState *env = &cpu->env;
store_booke_tcr(env, 0);
store_booke_tsr(env, -1);
}
/*
* This function will be called whenever the CPU state changes.
* CPU states are defined in "typedef enum RunState".
* Regarding the timer: when the CPU state changes to running after a debug
* halt or a similarly long pause, the final watchdog expiry may have
* happened in the meantime. That would cause an exit to QEMU and the
* configured watchdog action to be taken. To avoid this we always clear
* the watchdog state when the state changes to running.
*/
static void cpu_state_change_handler(void *opaque, int running, RunState state)
{
PowerPCCPU *cpu = opaque;
CPUPPCState *env = &cpu->env;
if (!running) {
return;
}
/*
* Clear watchdog interrupt condition by clearing TSR.
*/
store_booke_tsr(env, TSR_ENW | TSR_WIS | TSR_WRS_MASK);
}
#endif
void ppc_booke_timers_init(PowerPCCPU *cpu, uint32_t freq, uint32_t flags)
{
ppc_tb_t *tb_env;
booke_timer_t *booke_timer;
tb_env = g_malloc0(sizeof(ppc_tb_t));
booke_timer = g_malloc0(sizeof(booke_timer_t));
cpu->env.tb_env = tb_env;
tb_env->flags = flags | PPC_TIMER_BOOKE | PPC_DECR_ZERO_TRIGGERED;
tb_env->tb_freq = freq;
tb_env->decr_freq = freq;
tb_env->opaque = booke_timer;
tb_env->decr_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, &booke_decr_cb, cpu);
booke_timer->fit_timer =
timer_new_ns(QEMU_CLOCK_VIRTUAL, &booke_fit_cb, cpu);
booke_timer->wdt_timer =
timer_new_ns(QEMU_CLOCK_VIRTUAL, &booke_wdt_cb, cpu);
#if 0
int ret = 0;
ret = kvmppc_booke_watchdog_enable(cpu);
if (ret) {
/* TODO: Start the QEMU emulated watchdog if not running on KVM.
* Also start the QEMU emulated watchdog if KVM does not support
* an emulated watchdog or it somehow is not enabled (supported but
* not enabled would be a bug that requires debugging).
*/
}
qemu_add_vm_change_state_handler(cpu_state_change_handler, cpu);
qemu_register_reset(ppc_booke_timer_reset_handle, cpu);
#endif
}

qemu/hw/s390x/s390-skeys.c (new file, 125 lines):

@ -0,0 +1,125 @@
/*
* s390 storage key device
*
* Copyright 2015 IBM Corp.
* Author(s): Jason J. Herne <jjherne@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or (at
* your option) any later version. See the COPYING file in the top-level
* directory.
*/
#include "qemu/osdep.h"
#include "qemu/units.h"
#include "target/s390x/cpu.h"
#include "hw/s390x/storage-keys.h"
#define S390_SKEYS_BUFFER_SIZE (128 * KiB) /* Room for 128k storage keys */
#define S390_SKEYS_SAVE_FLAG_EOS 0x01
#define S390_SKEYS_SAVE_FLAG_SKEYS 0x02
#define S390_SKEYS_SAVE_FLAG_ERROR 0x04
static void s390_skeys_class_init(uc_engine *uc, S390SKeysClass* class);
static void qemu_s390_skeys_class_init(uc_engine *uc, S390SKeysClass* skeyclass);
static void s390_skeys_instance_init(uc_engine *uc, S390SKeysState* ss);
static void qemu_s390_skeys_init(uc_engine *uc, QEMUS390SKeysState *skey);
void s390_skeys_init(uc_engine *uc)
{
S390CPU *cpu = S390_CPU(uc->cpu);
s390_skeys_class_init(uc, &cpu->skey);
qemu_s390_skeys_class_init(uc, &cpu->skey);
s390_skeys_instance_init(uc, (S390SKeysState*)&cpu->ss);
qemu_s390_skeys_init(uc, &cpu->ss);
cpu->ss.class = &cpu->skey;
}
static void qemu_s390_skeys_init(uc_engine *uc, QEMUS390SKeysState *skeys)
{
//QEMUS390SKeysState *skeys = QEMU_S390_SKEYS(obj);
//MachineState *machine = MACHINE(qdev_get_machine());
//skeys->key_count = machine->ram_size / TARGET_PAGE_SIZE;
// Unicorn: Allow users to configure this value?
skeys->key_count = 0x20000000 / TARGET_PAGE_SIZE;
skeys->keydata = g_malloc0(skeys->key_count);
}
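/* Note: with the hard-coded 0x20000000 (512 MiB) guest size and the usual
 * 4 KiB s390x TARGET_PAGE_SIZE, key_count comes out to 131072 -- one key
 * byte per page, exactly the 128 KiB S390_SKEYS_BUFFER_SIZE above. */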
static int qemu_s390_skeys_enabled(S390SKeysState *ss)
{
return 1;
}
/*
* TODO: for memory hotplug support qemu_s390_skeys_set and qemu_s390_skeys_get
* will have to make sure that the given gfn belongs to a memory region and not
* a memory hole.
*/
static int qemu_s390_skeys_set(S390SKeysState *ss, uint64_t start_gfn,
uint64_t count, uint8_t *keys)
{
QEMUS390SKeysState *skeydev = QEMU_S390_SKEYS(ss);
uint64_t i;
/* Check for uint64 overflow and access beyond end of key data */
if (start_gfn + count > skeydev->key_count || start_gfn + count < count) {
// error_report("Error: Setting storage keys for page beyond the end "
// "of memory: gfn=%" PRIx64 " count=%" PRId64,
// start_gfn, count);
return -EINVAL;
}
for (i = 0; i < count; i++) {
skeydev->keydata[start_gfn + i] = keys[i];
}
return 0;
}
static int qemu_s390_skeys_get(S390SKeysState *ss, uint64_t start_gfn,
uint64_t count, uint8_t *keys)
{
QEMUS390SKeysState *skeydev = QEMU_S390_SKEYS(ss);
uint64_t i;
/* Check for uint64 overflow and access beyond end of key data */
if (start_gfn + count > skeydev->key_count || start_gfn + count < count) {
// error_report("Error: Getting storage keys for page beyond the end "
// "of memory: gfn=%" PRIx64 " count=%" PRId64,
// start_gfn, count);
return -EINVAL;
}
for (i = 0; i < count; i++) {
keys[i] = skeydev->keydata[start_gfn + i];
}
return 0;
}
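/* Worked example of the overflow guard used above: start_gfn = UINT64_MAX
 * and count = 2 wrap to a sum of 1; the "sum > key_count" test alone would
 * miss that, but "sum < count" (1 < 2) catches it. */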
static void qemu_s390_skeys_class_init(uc_engine *uc, S390SKeysClass* skeyclass)
{
// S390SKeysClass *skeyclass = S390_SKEYS_CLASS(oc);
// DeviceClass *dc = DEVICE_CLASS(oc);
skeyclass->skeys_enabled = qemu_s390_skeys_enabled;
skeyclass->get_skeys = qemu_s390_skeys_get;
skeyclass->set_skeys = qemu_s390_skeys_set;
/* Reason: Internal device (only one skeys device for the whole memory) */
// dc->user_creatable = false;
}
static void s390_skeys_instance_init(uc_engine *uc, S390SKeysState* ss)
{
ss->migration_enabled = true;
}
static void s390_skeys_class_init(uc_engine *uc, S390SKeysClass* class)
{
// DeviceClass *dc = DEVICE_CLASS(oc);
// dc->hotpluggable = false;
// set_bit(DEVICE_CATEGORY_MISC, dc->categories);
}


@ -1 +0,0 @@
obj-y += leon3.o


@ -1,72 +0,0 @@
/*
* QEMU Leon3 System Emulator
*
* Copyright (c) 2010-2011 AdaCore
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/* Unicorn Emulator Engine */
/* By Nguyen Anh Quynh, 2015 */
#include "hw/hw.h"
#include "hw/sparc/sparc.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
static int leon3_generic_hw_init(struct uc_struct *uc, MachineState *machine)
{
const char *cpu_model = machine->cpu_model;
SPARCCPU *cpu;
/* Init CPU */
if (!cpu_model) {
cpu_model = "LEON3";
}
cpu = cpu_sparc_init(uc, cpu_model);
uc->cpu = CPU(cpu);
if (cpu == NULL) {
fprintf(stderr, "qemu: Unable to find Sparc CPU definition\n");
return -1;
}
cpu_sparc_set_id(&cpu->env, 0);
return 0;
}
void leon3_machine_init(struct uc_struct *uc)
{
static QEMUMachine leon3_generic_machine = {
NULL,
"leon3_generic",
leon3_generic_hw_init,
NULL,
0,
1,
UC_ARCH_SPARC,
};
//printf(">>> leon3_machine_init\n");
qemu_register_machine(uc, &leon3_generic_machine, TYPE_MACHINE, NULL);
}


@ -1 +0,0 @@
obj-y += sun4u.o


@ -1,2 +0,0 @@
#include "config-host.h"
#include "config-target.h"


@ -10,27 +10,26 @@ struct aes_key_st {
 };
 typedef struct aes_key_st AES_KEY;
-/* FreeBSD has its own AES_set_decrypt_key in -lcrypto, avoid conflicts */
-#ifdef __FreeBSD__
+/* FreeBSD/OpenSSL have their own AES functions with the same names in -lcrypto
+ * (which might be pulled in via curl), so redefine to avoid conflicts. */
 #define AES_set_encrypt_key QEMU_AES_set_encrypt_key
 #define AES_set_decrypt_key QEMU_AES_set_decrypt_key
 #define AES_encrypt QEMU_AES_encrypt
 #define AES_decrypt QEMU_AES_decrypt
 #define AES_cbc_encrypt QEMU_AES_cbc_encrypt
-#endif
 int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
-    AES_KEY *key);
+                        AES_KEY *key);
 int AES_set_decrypt_key(const unsigned char *userKey, const int bits,
-    AES_KEY *key);
+                        AES_KEY *key);
 void AES_encrypt(const unsigned char *in, unsigned char *out,
-    const AES_KEY *key);
+                 const AES_KEY *key);
 void AES_decrypt(const unsigned char *in, unsigned char *out,
-    const AES_KEY *key);
+                 const AES_KEY *key);
 void AES_cbc_encrypt(const unsigned char *in, unsigned char *out,
-    const unsigned long length, const AES_KEY *key,
-    unsigned char *ivec, const int enc);
+                     const unsigned long length, const AES_KEY *key,
+                     unsigned char *ivec, const int enc);
extern const uint8_t AES_sbox[256];
extern const uint8_t AES_isbox[256];


@ -1,12 +1,12 @@
 /*
- * Logging support
+ * QEMU Crypto initialization
 *
- * Copyright (c) 2003 Fabrice Bellard
+ * Copyright (c) 2015 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
+ * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
@ -15,33 +15,14 @@
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ *
 */
-#include "qemu-common.h"
-#include "qemu/log.h"
+#ifndef QCRYPTO_INIT_H
+#define QCRYPTO_INIT_H
-FILE *qemu_logfile;
-int qemu_loglevel;
+#include "qapi/error.h"
-void qemu_log(const char *fmt, ...)
-{
-    va_list ap;
-    va_start(ap, fmt);
-    if (qemu_logfile) {
-        vfprintf(qemu_logfile, fmt, ap);
-    }
-    va_end(ap);
-}
-void qemu_log_mask(int mask, const char *fmt, ...)
-{
-    va_list ap;
-    va_start(ap, fmt);
-    if ((qemu_loglevel & mask) && qemu_logfile) {
-        vfprintf(qemu_logfile, fmt, ap);
-    }
-    va_end(ap);
-}
+int qcrypto_init(Error **errp);
+#endif /* QCRYPTO_INIT_H */


@ -0,0 +1,33 @@
/*
* QEMU Crypto random number provider
*
* Copyright (c) 2015-2016 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef QCRYPTO_RANDOM_H
#define QCRYPTO_RANDOM_H
/**
* qcrypto_random_init:
*
* Initializes the handles used by qcrypto_random_bytes
*
* Returns 0 on success, -1 on error
*/
int qcrypto_random_init(void);
#endif /* QCRYPTO_RANDOM_H */


@ -0,0 +1,519 @@
/* Interface between the opcode library and its callers.
Written by Cygnus Support, 1993.
The opcode library (libopcodes.a) provides instruction decoders for
a large variety of instruction sets, callable with an identical
interface, for making instruction-processing programs more independent
of the instruction set being processed. */
#ifndef DISAS_DIS_ASM_H
#define DISAS_DIS_ASM_H
typedef void *PTR;
typedef uint64_t bfd_vma;
typedef int64_t bfd_signed_vma;
typedef uint8_t bfd_byte;
#define sprintf_vma(s,x) sprintf (s, "%0" PRIx64, x)
#define snprintf_vma(s,ss,x) snprintf (s, ss, "%0" PRIx64, x)
#define BFD64
enum bfd_flavour {
bfd_target_unknown_flavour,
bfd_target_aout_flavour,
bfd_target_coff_flavour,
bfd_target_ecoff_flavour,
bfd_target_elf_flavour,
bfd_target_ieee_flavour,
bfd_target_nlm_flavour,
bfd_target_oasys_flavour,
bfd_target_tekhex_flavour,
bfd_target_srec_flavour,
bfd_target_ihex_flavour,
bfd_target_som_flavour,
bfd_target_os9k_flavour,
bfd_target_versados_flavour,
bfd_target_msdos_flavour,
bfd_target_evax_flavour
};
enum bfd_endian { BFD_ENDIAN_BIG, BFD_ENDIAN_LITTLE, BFD_ENDIAN_UNKNOWN };
enum bfd_architecture
{
bfd_arch_unknown, /* File arch not known */
bfd_arch_obscure, /* Arch known, not one of these */
bfd_arch_m68k, /* Motorola 68xxx */
#define bfd_mach_m68000 1
#define bfd_mach_m68008 2
#define bfd_mach_m68010 3
#define bfd_mach_m68020 4
#define bfd_mach_m68030 5
#define bfd_mach_m68040 6
#define bfd_mach_m68060 7
#define bfd_mach_cpu32 8
#define bfd_mach_mcf5200 9
#define bfd_mach_mcf5206e 10
#define bfd_mach_mcf5307 11
#define bfd_mach_mcf5407 12
#define bfd_mach_mcf528x 13
#define bfd_mach_mcfv4e 14
#define bfd_mach_mcf521x 15
#define bfd_mach_mcf5249 16
#define bfd_mach_mcf547x 17
#define bfd_mach_mcf548x 18
bfd_arch_vax, /* DEC Vax */
bfd_arch_i960, /* Intel 960 */
/* The order of the following is important.
A lower number indicates a machine type that
only accepts a subset of the instructions
available to machines with higher numbers.
The exception is the "ca", which is
incompatible with all other machines except
"core". */
#define bfd_mach_i960_core 1
#define bfd_mach_i960_ka_sa 2
#define bfd_mach_i960_kb_sb 3
#define bfd_mach_i960_mc 4
#define bfd_mach_i960_xa 5
#define bfd_mach_i960_ca 6
#define bfd_mach_i960_jx 7
#define bfd_mach_i960_hx 8
bfd_arch_a29k, /* AMD 29000 */
bfd_arch_sparc, /* SPARC */
#define bfd_mach_sparc 1
/* The difference between v8plus and v9 is that v9 is a true 64 bit env. */
#define bfd_mach_sparc_sparclet 2
#define bfd_mach_sparc_sparclite 3
#define bfd_mach_sparc_v8plus 4
#define bfd_mach_sparc_v8plusa 5 /* with ultrasparc add'ns. */
#define bfd_mach_sparc_sparclite_le 6
#define bfd_mach_sparc_v9 7
#define bfd_mach_sparc_v9a 8 /* with ultrasparc add'ns. */
#define bfd_mach_sparc_v8plusb 9 /* with cheetah add'ns. */
#define bfd_mach_sparc_v9b 10 /* with cheetah add'ns. */
/* Nonzero if MACH has the v9 instruction set. */
#define bfd_mach_sparc_v9_p(mach) \
((mach) >= bfd_mach_sparc_v8plus && (mach) <= bfd_mach_sparc_v9b \
&& (mach) != bfd_mach_sparc_sparclite_le)
bfd_arch_mips, /* MIPS Rxxxx */
#define bfd_mach_mips3000 3000
#define bfd_mach_mips3900 3900
#define bfd_mach_mips4000 4000
#define bfd_mach_mips4010 4010
#define bfd_mach_mips4100 4100
#define bfd_mach_mips4300 4300
#define bfd_mach_mips4400 4400
#define bfd_mach_mips4600 4600
#define bfd_mach_mips4650 4650
#define bfd_mach_mips5000 5000
#define bfd_mach_mips6000 6000
#define bfd_mach_mips8000 8000
#define bfd_mach_mips10000 10000
#define bfd_mach_mips16 16
bfd_arch_i386, /* Intel 386 */
#define bfd_mach_i386_i386 0
#define bfd_mach_i386_i8086 1
#define bfd_mach_i386_i386_intel_syntax 2
#define bfd_mach_x86_64 3
#define bfd_mach_x86_64_intel_syntax 4
bfd_arch_we32k, /* AT&T WE32xxx */
bfd_arch_tahoe, /* CCI/Harris Tahoe */
bfd_arch_i860, /* Intel 860 */
bfd_arch_romp, /* IBM ROMP PC/RT */
bfd_arch_alliant, /* Alliant */
bfd_arch_convex, /* Convex */
bfd_arch_m88k, /* Motorola 88xxx */
bfd_arch_pyramid, /* Pyramid Technology */
bfd_arch_h8300, /* Hitachi H8/300 */
#define bfd_mach_h8300 1
#define bfd_mach_h8300h 2
#define bfd_mach_h8300s 3
bfd_arch_powerpc, /* PowerPC */
#define bfd_mach_ppc 0
#define bfd_mach_ppc64 1
#define bfd_mach_ppc_403 403
#define bfd_mach_ppc_403gc 4030
#define bfd_mach_ppc_e500 500
#define bfd_mach_ppc_505 505
#define bfd_mach_ppc_601 601
#define bfd_mach_ppc_602 602
#define bfd_mach_ppc_603 603
#define bfd_mach_ppc_ec603e 6031
#define bfd_mach_ppc_604 604
#define bfd_mach_ppc_620 620
#define bfd_mach_ppc_630 630
#define bfd_mach_ppc_750 750
#define bfd_mach_ppc_860 860
#define bfd_mach_ppc_a35 35
#define bfd_mach_ppc_rs64ii 642
#define bfd_mach_ppc_rs64iii 643
#define bfd_mach_ppc_7400 7400
bfd_arch_rs6000, /* IBM RS/6000 */
bfd_arch_hppa, /* HP PA RISC */
#define bfd_mach_hppa10 10
#define bfd_mach_hppa11 11
#define bfd_mach_hppa20 20
#define bfd_mach_hppa20w 25
bfd_arch_d10v, /* Mitsubishi D10V */
bfd_arch_z8k, /* Zilog Z8000 */
#define bfd_mach_z8001 1
#define bfd_mach_z8002 2
bfd_arch_h8500, /* Hitachi H8/500 */
bfd_arch_sh, /* Hitachi SH */
#define bfd_mach_sh 1
#define bfd_mach_sh2 0x20
#define bfd_mach_sh_dsp 0x2d
#define bfd_mach_sh2a 0x2a
#define bfd_mach_sh2a_nofpu 0x2b
#define bfd_mach_sh2e 0x2e
#define bfd_mach_sh3 0x30
#define bfd_mach_sh3_nommu 0x31
#define bfd_mach_sh3_dsp 0x3d
#define bfd_mach_sh3e 0x3e
#define bfd_mach_sh4 0x40
#define bfd_mach_sh4_nofpu 0x41
#define bfd_mach_sh4_nommu_nofpu 0x42
#define bfd_mach_sh4a 0x4a
#define bfd_mach_sh4a_nofpu 0x4b
#define bfd_mach_sh4al_dsp 0x4d
#define bfd_mach_sh5 0x50
bfd_arch_alpha, /* Dec Alpha */
#define bfd_mach_alpha 1
#define bfd_mach_alpha_ev4 0x10
#define bfd_mach_alpha_ev5 0x20
#define bfd_mach_alpha_ev6 0x30
bfd_arch_arm, /* Advanced Risc Machines ARM */
#define bfd_mach_arm_unknown 0
#define bfd_mach_arm_2 1
#define bfd_mach_arm_2a 2
#define bfd_mach_arm_3 3
#define bfd_mach_arm_3M 4
#define bfd_mach_arm_4 5
#define bfd_mach_arm_4T 6
#define bfd_mach_arm_5 7
#define bfd_mach_arm_5T 8
#define bfd_mach_arm_5TE 9
#define bfd_mach_arm_XScale 10
#define bfd_mach_arm_ep9312 11
#define bfd_mach_arm_iWMMXt 12
#define bfd_mach_arm_iWMMXt2 13
bfd_arch_ns32k, /* National Semiconductors ns32000 */
bfd_arch_w65, /* WDC 65816 */
bfd_arch_tic30, /* Texas Instruments TMS320C30 */
bfd_arch_v850, /* NEC V850 */
#define bfd_mach_v850 0
bfd_arch_arc, /* Argonaut RISC Core */
#define bfd_mach_arc_base 0
bfd_arch_m32r, /* Mitsubishi M32R/D */
#define bfd_mach_m32r 0 /* backwards compatibility */
bfd_arch_mn10200, /* Matsushita MN10200 */
bfd_arch_mn10300, /* Matsushita MN10300 */
bfd_arch_cris, /* Axis CRIS */
#define bfd_mach_cris_v0_v10 255
#define bfd_mach_cris_v32 32
#define bfd_mach_cris_v10_v32 1032
bfd_arch_microblaze, /* Xilinx MicroBlaze. */
bfd_arch_moxie, /* The Moxie core. */
bfd_arch_ia64, /* HP/Intel ia64 */
#define bfd_mach_ia64_elf64 64
#define bfd_mach_ia64_elf32 32
bfd_arch_nios2, /* Nios II */
#define bfd_mach_nios2 0
#define bfd_mach_nios2r1 1
#define bfd_mach_nios2r2 2
bfd_arch_lm32, /* Lattice Mico32 */
#define bfd_mach_lm32 1
bfd_arch_rx, /* Renesas RX */
#define bfd_mach_rx 0x75
#define bfd_mach_rx_v2 0x76
#define bfd_mach_rx_v3 0x77
bfd_arch_last
};
#define bfd_mach_s390_31 31
#define bfd_mach_s390_64 64
typedef struct symbol_cache_entry
{
const char *name;
union
{
PTR p;
bfd_vma i;
} udata;
} asymbol;
typedef int (*fprintf_function)(FILE *f, const char *fmt, ...)
GCC_FMT_ATTR(2, 3);
enum dis_insn_type {
dis_noninsn, /* Not a valid instruction */
dis_nonbranch, /* Not a branch instruction */
dis_branch, /* Unconditional branch */
dis_condbranch, /* Conditional branch */
dis_jsr, /* Jump to subroutine */
dis_condjsr, /* Conditional jump to subroutine */
dis_dref, /* Data reference instruction */
dis_dref2 /* Two data references in instruction */
};
/* This struct is passed into the instruction decoding routine,
and is passed back out into each callback. The various fields are used
for conveying information from your main routine into your callbacks,
for passing information into the instruction decoders (such as the
addresses of the callback functions), or for passing information
back from the instruction decoders to their callers.
It must be initialized before it is first passed; this can be done
by hand, or using one of the initialization macros below. */
typedef struct disassemble_info {
fprintf_function fprintf_func;
FILE *stream;
PTR application_data;
/* Target description. We could replace this with a pointer to the bfd,
but that would require one. There currently isn't any such requirement
so to avoid introducing one we record these explicitly. */
/* The bfd_flavour. This can be bfd_target_unknown_flavour. */
enum bfd_flavour flavour;
/* The bfd_arch value. */
enum bfd_architecture arch;
/* The bfd_mach value. */
unsigned long mach;
/* Endianness (for bi-endian cpus). Mono-endian cpus can ignore this. */
enum bfd_endian endian;
/* An array of pointers to symbols either at the location being disassembled
or at the start of the function being disassembled. The array is sorted
so that the first symbol is intended to be the one used. The others are
present for any misc. purposes. This is not set reliably, but if it is
not NULL, it is correct. */
asymbol **symbols;
/* Number of symbols in array. */
int num_symbols;
/* For use by the disassembler.
The top 16 bits are reserved for public use (and are documented here).
The bottom 16 bits are for the internal use of the disassembler. */
unsigned long flags;
#define INSN_HAS_RELOC 0x80000000
#define INSN_ARM_BE32 0x00010000
PTR private_data;
/* Function used to get bytes to disassemble. MEMADDR is the
address of the stuff to be disassembled, MYADDR is the address to
put the bytes in, and LENGTH is the number of bytes to read.
INFO is a pointer to this struct.
Returns an errno value or 0 for success. */
int (*read_memory_func)
(bfd_vma memaddr, bfd_byte *myaddr, int length,
struct disassemble_info *info);
/* Function which should be called if we get an error that we can't
recover from. STATUS is the errno value from read_memory_func and
MEMADDR is the address that we were trying to read. INFO is a
pointer to this struct. */
void (*memory_error_func)
(int status, bfd_vma memaddr, struct disassemble_info *info);
/* Function called to print ADDR. */
void (*print_address_func)
(bfd_vma addr, struct disassemble_info *info);
/* Function called to print an instruction. The function is architecture
* specific.
*/
int (*print_insn)(bfd_vma addr, struct disassemble_info *info);
/* Function called to determine if there is a symbol at the given ADDR.
If there is, the function returns 1, otherwise it returns 0.
This is used by ports which support an overlay manager where
the overlay number is held in the top part of an address. In
some circumstances we want to include the overlay number in the
address, (normally because there is a symbol associated with
that address), but sometimes we want to mask out the overlay bits. */
int (* symbol_at_address_func)
(bfd_vma addr, struct disassemble_info * info);
/* These are for buffer_read_memory. */
bfd_byte *buffer;
bfd_vma buffer_vma;
int buffer_length;
/* This variable may be set by the instruction decoder. It suggests
the number of bytes objdump should display on a single line. If
the instruction decoder sets this, it should always set it to
the same value in order to get reasonable looking output. */
int bytes_per_line;
/* the next two variables control the way objdump displays the raw data */
/* For example, if bytes_per_line is 8 and bytes_per_chunk is 4, the */
/* output will look like this:
00: 00000000 00000000
with the chunks displayed according to "display_endian". */
int bytes_per_chunk;
enum bfd_endian display_endian;
/* Results from instruction decoders. Not all decoders yet support
this information. This info is set each time an instruction is
decoded, and is only valid for the last such instruction.
To determine whether this decoder supports this information, set
insn_info_valid to 0, decode an instruction, then check it. */
char insn_info_valid; /* Branch info has been set. */
char branch_delay_insns; /* How many sequential insn's will run before
a branch takes effect. (0 = normal) */
char data_size; /* Size of data reference in insn, in bytes */
enum dis_insn_type insn_type; /* Type of instruction */
bfd_vma target; /* Target address of branch or dref, if known;
zero if unknown. */
bfd_vma target2; /* Second target address for dref2 */
/* Command line options specific to the target disassembler. */
char * disassembler_options;
/* Field intended to be used by targets in any way they deem suitable. */
int64_t target_info;
/* Options for Capstone disassembly. */
int cap_arch;
int cap_mode;
int cap_insn_unit;
int cap_insn_split;
} disassemble_info;
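/* As an illustrative sketch only: a caller that disassembles out of its own
 * guest memory (rather than a flat buffer) can override read_memory_func.
 * MyVM and my_vm_read are hypothetical names standing in for the caller's
 * own types; the contract is just "fill MYADDR, return 0 or an errno". */
#if 0 /* example, not part of this header */
#include <errno.h>
static int my_read_memory(bfd_vma memaddr, bfd_byte *myaddr, int length,
                          struct disassemble_info *info)
{
    MyVM *vm = (MyVM *)info->private_data; /* stashed there by the caller */
    return my_vm_read(vm, memaddr, myaddr, length) == 0 ? 0 : EIO;
}
#endif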
/* Standard disassemblers. Disassemble one instruction at the given
target address. Return number of bytes processed. */
typedef int (*disassembler_ftype) (bfd_vma, disassemble_info *);
int print_insn_tci(bfd_vma, disassemble_info*);
int print_insn_big_mips (bfd_vma, disassemble_info*);
int print_insn_little_mips (bfd_vma, disassemble_info*);
int print_insn_nanomips (bfd_vma, disassemble_info*);
int print_insn_i386 (bfd_vma, disassemble_info*);
int print_insn_m68k (bfd_vma, disassemble_info*);
int print_insn_z8001 (bfd_vma, disassemble_info*);
int print_insn_z8002 (bfd_vma, disassemble_info*);
int print_insn_h8300 (bfd_vma, disassemble_info*);
int print_insn_h8300h (bfd_vma, disassemble_info*);
int print_insn_h8300s (bfd_vma, disassemble_info*);
int print_insn_h8500 (bfd_vma, disassemble_info*);
int print_insn_arm_a64 (bfd_vma, disassemble_info*);
int print_insn_alpha (bfd_vma, disassemble_info*);
disassembler_ftype arc_get_disassembler (int, int);
int print_insn_arm (bfd_vma, disassemble_info*);
int print_insn_sparc (bfd_vma, disassemble_info*);
int print_insn_big_a29k (bfd_vma, disassemble_info*);
int print_insn_little_a29k (bfd_vma, disassemble_info*);
int print_insn_i960 (bfd_vma, disassemble_info*);
int print_insn_sh (bfd_vma, disassemble_info*);
int print_insn_shl (bfd_vma, disassemble_info*);
int print_insn_hppa (bfd_vma, disassemble_info*);
int print_insn_m32r (bfd_vma, disassemble_info*);
int print_insn_m88k (bfd_vma, disassemble_info*);
int print_insn_mn10200 (bfd_vma, disassemble_info*);
int print_insn_mn10300 (bfd_vma, disassemble_info*);
int print_insn_moxie (bfd_vma, disassemble_info*);
int print_insn_ns32k (bfd_vma, disassemble_info*);
int print_insn_big_powerpc (bfd_vma, disassemble_info*);
int print_insn_little_powerpc (bfd_vma, disassemble_info*);
int print_insn_rs6000 (bfd_vma, disassemble_info*);
int print_insn_w65 (bfd_vma, disassemble_info*);
int print_insn_d10v (bfd_vma, disassemble_info*);
int print_insn_v850 (bfd_vma, disassemble_info*);
int print_insn_tic30 (bfd_vma, disassemble_info*);
int print_insn_ppc (bfd_vma, disassemble_info*);
int print_insn_s390 (bfd_vma, disassemble_info*);
int print_insn_crisv32 (bfd_vma, disassemble_info*);
int print_insn_crisv10 (bfd_vma, disassemble_info*);
int print_insn_microblaze (bfd_vma, disassemble_info*);
int print_insn_ia64 (bfd_vma, disassemble_info*);
int print_insn_lm32 (bfd_vma, disassemble_info*);
int print_insn_big_nios2 (bfd_vma, disassemble_info*);
int print_insn_little_nios2 (bfd_vma, disassemble_info*);
int print_insn_xtensa (bfd_vma, disassemble_info*);
int print_insn_riscv32 (bfd_vma, disassemble_info*);
int print_insn_riscv64 (bfd_vma, disassemble_info*);
int print_insn_rx(bfd_vma, disassemble_info *);
#if 0
/* Fetch the disassembler for a given BFD, if that support is available. */
disassembler_ftype disassembler(bfd *);
#endif
/* This block of definitions is for particular callers who read instructions
into a buffer before calling the instruction decoder. */
/* Here is a function which callers may wish to use for read_memory_func.
It gets bytes from a buffer. */
int buffer_read_memory(bfd_vma, bfd_byte *, int, struct disassemble_info *);
/* This function goes with buffer_read_memory.
It prints a message using info->fprintf_func and info->stream. */
void perror_memory(int, bfd_vma, struct disassemble_info *);
/* Just print the address in hex. This is included for completeness even
though both GDB and objdump provide their own (to print symbolic
addresses). */
void generic_print_address(bfd_vma, struct disassemble_info *);
/* Always true. */
int generic_symbol_at_address(bfd_vma, struct disassemble_info *);
/* Macro to initialize a disassemble_info struct. This should be called
by all applications creating such a struct. */
#define INIT_DISASSEMBLE_INFO(INFO, STREAM, FPRINTF_FUNC) \
(INFO).flavour = bfd_target_unknown_flavour, \
(INFO).arch = bfd_arch_unknown, \
(INFO).mach = 0, \
(INFO).endian = BFD_ENDIAN_UNKNOWN, \
INIT_DISASSEMBLE_INFO_NO_ARCH(INFO, STREAM, FPRINTF_FUNC)
/* Call this macro to initialize only the internal variables for the
disassembler. Architecture dependent things such as byte order, or machine
variant are not touched by this macro. This makes things much easier for
GDB which must initialize these things separately. */
#define INIT_DISASSEMBLE_INFO_NO_ARCH(INFO, STREAM, FPRINTF_FUNC) \
(INFO).fprintf_func = (FPRINTF_FUNC), \
(INFO).stream = (STREAM), \
(INFO).symbols = NULL, \
(INFO).num_symbols = 0, \
(INFO).private_data = NULL, \
(INFO).buffer = NULL, \
(INFO).buffer_vma = 0, \
(INFO).buffer_length = 0, \
(INFO).read_memory_func = buffer_read_memory, \
(INFO).memory_error_func = perror_memory, \
(INFO).print_address_func = generic_print_address, \
(INFO).print_insn = NULL, \
(INFO).symbol_at_address_func = generic_symbol_at_address, \
(INFO).flags = 0, \
(INFO).bytes_per_line = 0, \
(INFO).bytes_per_chunk = 0, \
(INFO).display_endian = BFD_ENDIAN_UNKNOWN, \
(INFO).disassembler_options = NULL, \
(INFO).insn_info_valid = 0
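/* A minimal usage sketch (assumptions: an i386 target and stdio for output;
 * disas_buffer is a hypothetical helper, not part of this API).
 * INIT_DISASSEMBLE_INFO wires up buffer_read_memory, so only the buffer
 * fields need to be filled in before calling a print_insn_* routine. */
#if 0 /* example, not part of this header */
static int disas_buffer(const bfd_byte *code, int size, bfd_vma vma)
{
    disassemble_info info;
    bfd_vma pc = vma;

    INIT_DISASSEMBLE_INFO(info, stdout, fprintf);
    info.buffer = (bfd_byte *)code;
    info.buffer_vma = vma;
    info.buffer_length = size;

    while (pc < vma + size) {
        int len = print_insn_i386(pc, &info); /* bytes consumed */
        if (len <= 0) {
            return -1;
        }
        fprintf(stdout, "\n");
        pc += len;
    }
    return 0;
}
#endif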
#ifndef ATTRIBUTE_UNUSED
#define ATTRIBUTE_UNUSED __attribute__((unused))
#endif
/* from libbfd */
bfd_vma bfd_getl64 (const bfd_byte *addr);
bfd_vma bfd_getl32 (const bfd_byte *addr);
bfd_vma bfd_getb32 (const bfd_byte *addr);
bfd_vma bfd_getl16 (const bfd_byte *addr);
bfd_vma bfd_getb16 (const bfd_byte *addr);
typedef bool bfd_boolean;
#endif /* DISAS_DIS_ASM_H */

File diff suppressed because it is too large


@ -1,35 +0,0 @@
/*
* Internal memory management interfaces
*
* Copyright 2011 Red Hat, Inc. and/or its affiliates
*
* Authors:
* Avi Kivity <avi@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#ifndef EXEC_MEMORY_H
#define EXEC_MEMORY_H
/*
* Internal interfaces between memory.c/exec.c/vl.c. Do not #include unless
* you're one of them.
*/
#include "exec/memory.h"
#ifndef CONFIG_USER_ONLY
/* Get the root memory region. This interface should only be used temporarily
* until a proper bus interface is available.
*/
MemoryRegion *get_system_memory(struct uc_struct *uc);
extern AddressSpace address_space_memory;
#endif
#endif


@ -19,22 +19,29 @@
#ifndef CPU_ALL_H
#define CPU_ALL_H
#include "qemu-common.h"
#include "exec/cpu-common.h"
#include "exec/memory.h"
#include "qemu/thread.h"
#include "qom/cpu.h"
#include "hw/core/cpu.h"
#include <uc_priv.h>
#if 0
#include "qemu/rcu.h"
#endif
#define EXCP_INTERRUPT 0x10000 /* async interruption */
#define EXCP_HLT 0x10001 /* hlt instruction reached */
#define EXCP_DEBUG 0x10002 /* cpu stopped after a breakpoint or singlestep */
#define EXCP_HALTED 0x10003 /* cpu is halted (waiting for external event) */
#define EXCP_YIELD 0x10004 /* cpu wants to yield timeslice to another */
#define EXCP_ATOMIC 0x10005 /* stop-the-world and emulate atomic */
/* some important defines:
*
* WORDS_ALIGNED : if defined, the host cpu can only make word aligned
* memory accesses.
*
* HOST_WORDS_BIGENDIAN : if defined, the host cpu is big endian and
* otherwise little endian.
*
* (TARGET_WORDS_ALIGNED : same for target cpu (not supported yet))
*
* TARGET_WORDS_BIGENDIAN : same for target cpu
*/
@ -115,43 +122,9 @@ static inline void tswap64s(uint64_t *s)
#define bswaptls(s) bswap64s(s)
#endif
/* CPU memory access without any memory or io remapping */
/*
* the generic syntax for the memory accesses is:
*
* load: ld{type}{sign}{size}{endian}_{access_type}(ptr)
*
* store: st{type}{size}{endian}_{access_type}(ptr, val)
*
* type is:
* (empty): integer access
* f : float access
*
* sign is:
* (empty): for floats or 32 bit size
* u : unsigned
* s : signed
*
* size is:
* b: 8 bits
* w: 16 bits
* l: 32 bits
* q: 64 bits
*
* endian is:
* (empty): target cpu endianness or 8 bit access
* r : reversed target cpu endianness (not implemented yet)
* be : big endian (not implemented yet)
* le : little endian (not implemented yet)
*
* access_type is:
* raw : host memory access
* user : user mode access using soft MMU
 * kernel : kernel mode access using soft MMU
 */
/* Target-endianness CPU memory access functions. These fit into the
 * {ld,st}{type}{sign}{size}{endian}_p naming scheme described in bswap.h.
 */
#if defined(TARGET_WORDS_BIGENDIAN)
#define lduw_p(p) lduw_be_p(p)
#define ldsw_p(p) ldsw_be_p(p)
@ -164,6 +137,8 @@ static inline void tswap64s(uint64_t *s)
#define stq_p(p, v) stq_be_p(p, v)
#define stfl_p(p, v) stfl_be_p(p, v)
#define stfq_p(p, v) stfq_be_p(p, v)
#define ldn_p(p, sz) ldn_be_p(p, sz)
#define stn_p(p, sz, v) stn_be_p(p, sz, v)
#else
#define lduw_p(p) lduw_le_p(p)
#define ldsw_p(p) ldsw_le_p(p)
@ -176,39 +151,102 @@ static inline void tswap64s(uint64_t *s)
#define stq_p(p, v) stq_le_p(p, v)
#define stfl_p(p, v) stfl_le_p(p, v)
#define stfq_p(p, v) stfq_le_p(p, v)
#define ldn_p(p, sz) ldn_le_p(p, sz)
#define stn_p(p, sz, v) stn_le_p(p, sz, v)
#endif
/* MMU memory access macros */
#include "exec/hwaddr.h"
#if defined(CONFIG_USER_ONLY)
#include <assert.h>
#include "exec/user/abitypes.h"
/* On some host systems the guest address space is reserved on the host.
 * This allows the guest address space to be offset to a convenient location.
 */
#if defined(CONFIG_USE_GUEST_BASE)
extern unsigned long guest_base;
extern int have_guest_base;
extern unsigned long reserved_va;
#define GUEST_BASE guest_base
#define RESERVED_VA reserved_va
#else
#define GUEST_BASE 0ul
#define RESERVED_VA 0ul
#endif
#define GUEST_ADDR_MAX (RESERVED_VA ? RESERVED_VA : \
                        (1ul << TARGET_VIRT_ADDR_SPACE_BITS) - 1)
#ifdef UNICORN_ARCH_POSTFIX
#define SUFFIX UNICORN_ARCH_POSTFIX
#else
#define SUFFIX
#endif
#define ARG1 as
#define ARG1_DECL AddressSpace *as
#define TARGET_ENDIANNESS
#include "exec/memory_ldst.inc.h"
#ifdef UNICORN_ARCH_POSTFIX
#define SUFFIX glue(_cached_slow, UNICORN_ARCH_POSTFIX)
#else
#define SUFFIX _cached_slow
#endif
#define ARG1 cache
#define ARG1_DECL MemoryRegionCache *cache
#define TARGET_ENDIANNESS
#include "exec/memory_ldst.inc.h"
static inline void stl_phys_notdirty(AddressSpace *as, hwaddr addr, uint32_t val)
{
#ifdef UNICORN_ARCH_POSTFIX
glue(address_space_stl_notdirty, UNICORN_ARCH_POSTFIX)
(as->uc, as, addr, val, MEMTXATTRS_UNSPECIFIED, NULL);
#else
address_space_stl_notdirty(as->uc, as, addr, val,
MEMTXATTRS_UNSPECIFIED, NULL);
#endif
}
#ifdef UNICORN_ARCH_POSTFIX
#define SUFFIX UNICORN_ARCH_POSTFIX
#else
#define SUFFIX
#endif
#define ARG1 as
#define ARG1_DECL AddressSpace *as
#define TARGET_ENDIANNESS
#include "exec/memory_ldst_phys.inc.h"
/* Inline fast path for direct RAM access. */
#define ENDIANNESS
#include "exec/memory_ldst_cached.inc.h"
#ifdef UNICORN_ARCH_POSTFIX
#define SUFFIX glue(_cached, UNICORN_ARCH_POSTFIX)
#else
#define SUFFIX _cached
#endif
#define ARG1 cache
#define ARG1_DECL MemoryRegionCache *cache
#define TARGET_ENDIANNESS
#include "exec/memory_ldst_phys.inc.h"
/* page related stuff */
#define TARGET_PAGE_SIZE (1 << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK ~(TARGET_PAGE_SIZE - 1)
#define TARGET_PAGE_ALIGN(addr) (((addr) + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK)
#ifdef TARGET_PAGE_BITS_VARY
typedef struct TargetPageBits {
bool decided;
int bits;
target_long mask;
} TargetPageBits;
#if defined(CONFIG_ATTRIBUTE_ALIAS) || !defined(IN_EXEC_VARY)
extern const TargetPageBits target_page;
#else
extern TargetPageBits target_page;
#endif
#define HOST_PAGE_ALIGN(addr) (((addr) + qemu_host_page_size - 1) & qemu_host_page_mask)
#ifdef CONFIG_DEBUG_TCG
#define TARGET_PAGE_BITS ({ assert(target_page.decided); target_page.bits; })
#define TARGET_PAGE_MASK ({ assert(target_page.decided); target_page.mask; })
#else
#define TARGET_PAGE_BITS uc->init_target_page->bits
#define TARGET_PAGE_MASK uc->init_target_page->mask
#endif
#define TARGET_PAGE_SIZE (-(int)TARGET_PAGE_MASK) // qq
#else
#define TARGET_PAGE_BITS_MIN TARGET_PAGE_BITS
#define TARGET_PAGE_SIZE (1 << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK ((target_ulong)-1 << TARGET_PAGE_BITS)
#endif
#define TARGET_PAGE_ALIGN(addr) ROUND_UP((addr), TARGET_PAGE_SIZE)
#define HOST_PAGE_ALIGN(uc, addr) ROUND_UP((addr), uc->qemu_host_page_size)
#if 0
#define REAL_HOST_PAGE_ALIGN(addr) ROUND_UP((addr), uc->qemu_real_host_page_size)
#endif
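/* Worked example (assuming TARGET_PAGE_BITS == 12, i.e. 4 KiB pages):
 *   TARGET_PAGE_SIZE          == 0x1000
 *   TARGET_PAGE_ALIGN(0x1234) == 0x2000  (rounds up to the next page)
 *   0x1234 & TARGET_PAGE_MASK == 0x1000  (rounds down to the page start)
 */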
/* same as PROT_xxx */
#define PAGE_READ 0x0001
@ -219,16 +257,9 @@ extern unsigned long reserved_va;
/* original state of the write flag (used when tracking self-modifying
   code) */
#define PAGE_WRITE_ORG 0x0010
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
/* FIXME: Code that sets/uses this is broken and needs to go away. */
#define PAGE_RESERVED 0x0020
#endif
#if defined(CONFIG_USER_ONLY)
//void page_dump(FILE *f);
int page_get_flags(target_ulong address);
#endif
/* Invalidate the TLB entry immediately, helpful for s390x
* Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs() */
#define PAGE_WRITE_INV 0x0040
CPUArchState *cpu_copy(CPUArchState *env);
@ -284,26 +315,131 @@ CPUArchState *cpu_copy(CPUArchState *env);
| CPU_INTERRUPT_TGT_EXT_3 \
| CPU_INTERRUPT_TGT_EXT_4)
#if !defined(CONFIG_USER_ONLY)
/* memory API */
/* Flags stored in the low bits of the TLB virtual address. These are
defined so that fast path ram access is all zeros. */
/*
* Flags stored in the low bits of the TLB virtual address.
* These are defined so that fast path ram access is all zeros.
* The flags all must be between TARGET_PAGE_BITS and
* maximum address alignment bit.
*
* Use TARGET_PAGE_BITS_MIN so that these bits are constant
* when TARGET_PAGE_BITS_VARY is in effect.
*/
/* Zero if TLB entry is valid. */
#define TLB_INVALID_MASK (1 << 3)
#define TLB_INVALID_MASK (1 << (TARGET_PAGE_BITS_MIN - 1))
/* Set if TLB entry references a clean RAM page. The iotlb entry will
contain the page physical address. */
#define TLB_NOTDIRTY (1 << 4)
#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS_MIN - 2))
/* Set if TLB entry is an IO callback. */
#define TLB_MMIO (1 << 5)
#define TLB_MMIO (1 << (TARGET_PAGE_BITS_MIN - 3))
/* Set if TLB entry contains a watchpoint. */
#define TLB_WATCHPOINT (1 << (TARGET_PAGE_BITS_MIN - 4))
/* Set if TLB entry requires byte swap. */
#define TLB_BSWAP (1 << (TARGET_PAGE_BITS_MIN - 5))
/* Set if TLB entry writes ignored. */
#define TLB_DISCARD_WRITE (1 << (TARGET_PAGE_BITS_MIN - 6))
ram_addr_t last_ram_offset(struct uc_struct *uc);
void qemu_mutex_lock_ramlist(struct uc_struct *uc);
void qemu_mutex_unlock_ramlist(struct uc_struct *uc);
#endif /* !CONFIG_USER_ONLY */
/* Use this mask to check interception with an alignment mask
* in a TCG backend.
*/
#define TLB_FLAGS_MASK \
(TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
| TLB_WATCHPOINT | TLB_BSWAP | TLB_DISCARD_WRITE)
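/* Worked example (assuming TARGET_PAGE_BITS_MIN == 12):
 *   TLB_INVALID_MASK  == 1 << 11
 *   TLB_NOTDIRTY      == 1 << 10
 *   TLB_MMIO          == 1 << 9
 *   TLB_WATCHPOINT    == 1 << 8
 *   TLB_BSWAP         == 1 << 7
 *   TLB_DISCARD_WRITE == 1 << 6
 * All six bits sit below the page bits, so a fast-path RAM access, which
 * has all of them clear, compares equal under TLB_FLAGS_MASK.
 */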
/**
* tlb_hit_page: return true if page aligned @addr is a hit against the
* TLB entry @tlb_addr
*
* @addr: virtual address to test (must be page aligned)
* @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
*/
static inline bool tlb_hit_page(struct uc_struct *uc, target_ulong tlb_addr, target_ulong addr)
{
return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
}
/**
* tlb_hit: return true if @addr is a hit against the TLB entry @tlb_addr
*
* @addr: virtual address to test (need not be page aligned)
* @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
*/
static inline bool tlb_hit(struct uc_struct *uc, target_ulong tlb_addr, target_ulong addr)
{
return tlb_hit_page(uc, tlb_addr, addr & TARGET_PAGE_MASK);
}
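/* Worked example (assuming 4 KiB pages): with tlb_addr == 0x40001000 and
 * addr == 0x40001234, tlb_hit() masks addr down to 0x40001000 and compares
 * it against tlb_addr with TLB_INVALID_MASK kept in the mask, so an entry
 * with the invalid bit set can never compare equal. */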
int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
uint8_t *buf, int len, int is_write);
void *ptr, target_ulong len, bool is_write);
int cpu_exec(struct uc_struct *uc, CPUState *cpu);
/**
* cpu_set_cpustate_pointers(cpu)
* @cpu: The cpu object
*
* Set the generic pointers in CPUState into the outer object.
*/
static inline void cpu_set_cpustate_pointers(ArchCPU *cpu)
{
cpu->parent_obj.env_ptr = &cpu->env;
cpu->parent_obj.icount_decr_ptr = &cpu->neg.icount_decr;
}
/**
* env_archcpu(env)
* @env: The architecture environment
*
* Return the ArchCPU associated with the environment.
*/
static inline ArchCPU *env_archcpu(CPUArchState *env)
{
return container_of(env, ArchCPU, env);
}
/**
* env_cpu(env)
* @env: The architecture environment
*
* Return the CPUState associated with the environment.
*/
static inline CPUState *env_cpu(CPUArchState *env)
{
return &env_archcpu(env)->parent_obj;
}
/**
* env_neg(env)
* @env: The architecture environment
*
* Return the CPUNegativeOffsetState associated with the environment.
*/
static inline CPUNegativeOffsetState *env_neg(CPUArchState *env)
{
ArchCPU *arch_cpu = container_of(env, ArchCPU, env);
return &arch_cpu->neg;
}
/**
* cpu_neg(cpu)
* @cpu: The generic CPUState
*
* Return the CPUNegativeOffsetState associated with the cpu.
*/
static inline CPUNegativeOffsetState *cpu_neg(CPUState *cpu)
{
ArchCPU *arch_cpu = container_of(cpu, ArchCPU, parent_obj);
return &arch_cpu->neg;
}
/**
* env_tlb(env)
* @env: The architecture environment
*
* Return the CPUTLB state associated with the environment.
*/
static inline CPUTLB *env_tlb(CPUArchState *env)
{
return &env_neg(env)->tlb;
}
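/* Sketch: the accessors above let target code hop between the three views
 * of one CPU object (assumption: any softmmu target):
 *
 *     CPUState *cs = env_cpu(env);        // generic view
 *     ArchCPU *cpu = env_archcpu(env);    // target view
 *     CPUTLB *tlb  = env_tlb(env);        // i.e. &cpu->neg.tlb
 */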
#endif /* CPU_ALL_H */


@ -1,24 +1,16 @@
#ifndef CPU_COMMON_H
#define CPU_COMMON_H 1
#define CPU_COMMON_H
/* CPU interfaces that are target independent. */
struct uc_struct;
#ifndef CONFIG_USER_ONLY
#include "exec/hwaddr.h"
#endif
#include "qemu/bswap.h"
#include "qemu/queue.h"
/* The CPU list lock nests outside page_(un)lock or mmap_(un)lock */
void qemu_init_cpu_list(void);
void cpu_list_lock(void);
void cpu_list_unlock(void);
typedef enum MMUAccessType {
MMU_DATA_LOAD = 0,
MMU_DATA_STORE = 1,
MMU_INST_FETCH = 2
} MMUAccessType;
#if !defined(CONFIG_USER_ONLY)
void tcg_flush_softmmu_tlb(struct uc_struct *uc);
enum device_endian {
DEVICE_NATIVE_ENDIAN,
@ -26,95 +18,57 @@ enum device_endian {
DEVICE_LITTLE_ENDIAN,
};
/* address in the RAM (different from a physical address) */
#if defined(CONFIG_XEN_BACKEND)
typedef uint64_t ram_addr_t;
# define RAM_ADDR_MAX UINT64_MAX
# define RAM_ADDR_FMT "%" PRIx64
#if defined(HOST_WORDS_BIGENDIAN)
#define DEVICE_HOST_ENDIAN DEVICE_BIG_ENDIAN
#else
#define DEVICE_HOST_ENDIAN DEVICE_LITTLE_ENDIAN
#endif
/* address in the RAM (different from a physical address) */
typedef uintptr_t ram_addr_t;
# define RAM_ADDR_MAX UINTPTR_MAX
# define RAM_ADDR_FMT "%" PRIxPTR
#endif
extern ram_addr_t ram_size;
/* memory API */
typedef void CPUWriteMemoryFunc(void *opaque, hwaddr addr, uint32_t value);
typedef uint32_t CPUReadMemoryFunc(void *opaque, hwaddr addr);
void qemu_ram_remap(struct uc_struct *uc, ram_addr_t addr, ram_addr_t length);
/* This should not be used by devices. */
MemoryRegion *qemu_ram_addr_from_host(struct uc_struct* uc, void *ptr, ram_addr_t *ram_addr);
void qemu_ram_set_idstr(struct uc_struct *uc, ram_addr_t addr, const char *name, DeviceState *dev);
void qemu_ram_unset_idstr(struct uc_struct *uc, ram_addr_t addr);
ram_addr_t qemu_ram_addr_from_host(struct uc_struct *uc, void *ptr);
RAMBlock *qemu_ram_block_from_host(struct uc_struct *uc, void *ptr,
bool round_offset, ram_addr_t *offset);
ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host);
void *qemu_ram_get_host_addr(RAMBlock *rb);
ram_addr_t qemu_ram_get_offset(RAMBlock *rb);
ram_addr_t qemu_ram_get_used_length(RAMBlock *rb);
bool qemu_ram_is_shared(RAMBlock *rb);
bool cpu_physical_memory_rw(AddressSpace *as, hwaddr addr, uint8_t *buf,
int len, int is_write);
size_t qemu_ram_pagesize(RAMBlock *block);
size_t qemu_ram_pagesize_largest(void);
bool cpu_physical_memory_rw(AddressSpace *as, hwaddr addr, void *buf,
hwaddr len, bool is_write);
static inline void cpu_physical_memory_read(AddressSpace *as, hwaddr addr,
void *buf, int len)
void *buf, hwaddr len)
{
cpu_physical_memory_rw(as, addr, buf, len, 0);
cpu_physical_memory_rw(as, addr, buf, len, false);
}
static inline void cpu_physical_memory_write(AddressSpace *as, hwaddr addr,
const void *buf, int len)
const void *buf, hwaddr len)
{
cpu_physical_memory_rw(as, addr, (void *)buf, len, 1);
cpu_physical_memory_rw(as, addr, (void *)buf, len, true);
}
void *cpu_physical_memory_map(AddressSpace *as, hwaddr addr,
hwaddr *plen,
int is_write);
bool is_write);
void cpu_physical_memory_unmap(AddressSpace *as, void *buffer, hwaddr len,
int is_write, hwaddr access_len);
void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));
bool is_write, hwaddr access_len);
bool cpu_physical_memory_is_io(AddressSpace *as, hwaddr phys_addr);
/* Coalesced MMIO regions are areas where write operations can be reordered.
* This usually implies that write operations are side-effect free. This allows
* batching which can make a major impact on performance when using
* virtualization.
*/
void qemu_flush_coalesced_mmio_buffer(void);
void cpu_flush_icache_range(AddressSpace *as, hwaddr start, hwaddr len);
uint32_t ldub_phys(AddressSpace *as, hwaddr addr);
uint32_t lduw_le_phys(AddressSpace *as, hwaddr addr);
uint32_t lduw_be_phys(AddressSpace *as, hwaddr addr);
uint32_t ldl_le_phys(AddressSpace *as, hwaddr addr);
uint32_t ldl_be_phys(AddressSpace *as, hwaddr addr);
uint64_t ldq_le_phys(AddressSpace *as, hwaddr addr);
uint64_t ldq_be_phys(AddressSpace *as, hwaddr addr);
void stb_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stw_le_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stw_be_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stl_le_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stl_be_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stq_le_phys(AddressSpace *as, hwaddr addr, uint64_t val);
void stq_be_phys(AddressSpace *as, hwaddr addr, uint64_t val);
int ram_block_discard_range(struct uc_struct *uc, RAMBlock *rb, uint64_t start, size_t length);
#ifdef NEED_CPU_H
uint32_t lduw_phys(AddressSpace *as, hwaddr addr);
uint32_t ldl_phys(AddressSpace *as, hwaddr addr);
uint64_t ldq_phys(AddressSpace *as, hwaddr addr);
void stl_phys_notdirty(AddressSpace *as, hwaddr addr, uint32_t val);
void stw_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stl_phys(AddressSpace *as, hwaddr addr, uint32_t val);
void stq_phys(AddressSpace *as, hwaddr addr, uint64_t val);
#endif
void cpu_physical_memory_write_rom(AddressSpace *as, hwaddr addr,
const uint8_t *buf, int len);
void cpu_flush_icache_range(AddressSpace *as, hwaddr start, int len);
extern struct MemoryRegion io_mem_rom;
extern struct MemoryRegion io_mem_notdirty;
typedef void (RAMBlockIterFunc)(void *host_addr,
ram_addr_t offset, ram_addr_t length, void *opaque);
void qemu_ram_foreach_block(struct uc_struct *uc, RAMBlockIterFunc func, void *opaque);
#endif
#endif /* !CPU_COMMON_H */
#endif /* CPU_COMMON_H */


@ -23,16 +23,35 @@
#error cpu.h included from common code
#endif
#include "config.h"
#include "unicorn/platform.h"
#include "qemu/osdep.h"
#include "qemu/queue.h"
#ifndef CONFIG_USER_ONLY
#include "qemu/host-utils.h"
#include "qemu/thread.h"
#include "tcg-target.h"
#include "exec/hwaddr.h"
#endif
#include "exec/memattrs.h"
#include "hw/core/cpu.h"
#include "cpu-param.h"
#ifndef TARGET_LONG_BITS
#error TARGET_LONG_BITS must be defined before including this header
# error TARGET_LONG_BITS must be defined in cpu-param.h
#endif
#ifndef NB_MMU_MODES
# error NB_MMU_MODES must be defined in cpu-param.h
#endif
#ifndef TARGET_PHYS_ADDR_SPACE_BITS
# error TARGET_PHYS_ADDR_SPACE_BITS must be defined in cpu-param.h
#endif
#ifndef TARGET_VIRT_ADDR_SPACE_BITS
# error TARGET_VIRT_ADDR_SPACE_BITS must be defined in cpu-param.h
#endif
#ifndef TARGET_PAGE_BITS
# ifdef TARGET_PAGE_BITS_VARY
# ifndef TARGET_PAGE_BITS_MIN
# error TARGET_PAGE_BITS_MIN must be defined in cpu-param.h
# endif
# else
# error TARGET_PAGE_BITS must be defined in cpu-param.h
# endif
#endif
#define TARGET_LONG_SIZE (TARGET_LONG_BITS / 8)
@ -54,23 +73,6 @@ typedef uint64_t target_ulong;
#error TARGET_LONG_SIZE undefined
#endif
#define EXCP_INTERRUPT 0x10000 /* async interruption */
#define EXCP_HLT 0x10001 /* hlt instruction reached */
#define EXCP_DEBUG 0x10002 /* cpu stopped after a breakpoint or singlestep */
#define EXCP_HALTED 0x10003 /* cpu is halted (waiting for external event) */
#define EXCP_YIELD 0x10004 /* cpu wants to yield timeslice to another */
/* Only the bottom TB_JMP_PAGE_BITS of the jump cache hash bits vary for
addresses on the same page. The top bits are the same. This allows
TLB invalidation to quickly clear a subset of the hash table. */
#define TB_JMP_PAGE_BITS (TB_JMP_CACHE_BITS / 2)
#define TB_JMP_PAGE_SIZE (1 << TB_JMP_PAGE_BITS)
#define TB_JMP_ADDR_MASK (TB_JMP_PAGE_SIZE - 1)
#define TB_JMP_PAGE_MASK (TB_JMP_CACHE_SIZE - TB_JMP_PAGE_SIZE)
#if !defined(CONFIG_USER_ONLY)
#define CPU_TLB_BITS 8
#define CPU_TLB_SIZE (1 << CPU_TLB_BITS)
/* use a fully associative victim tlb of 8 entries */
#define CPU_VTLB_SIZE 8
@ -80,6 +82,24 @@ typedef uint64_t target_ulong;
#define CPU_TLB_ENTRY_BITS 5
#endif
#define CPU_TLB_DYN_MIN_BITS 6
#define CPU_TLB_DYN_DEFAULT_BITS 8
# if HOST_LONG_BITS == 32
/* Make sure we do not require a double-word shift for the TLB load */
# define CPU_TLB_DYN_MAX_BITS (32 - TARGET_PAGE_BITS)
# else /* HOST_LONG_BITS == 64 */
/*
* Assuming TARGET_PAGE_BITS==12, with 2**22 entries we can cover 2**(22+12) ==
* 2**34 == 16G of address space. This is roughly what one would expect a
* TLB to cover in a modern (as of 2018) x86_64 CPU. For instance, Intel
* Skylake's Level-2 STLB has 16 1G entries.
* Also, make sure we do not size the TLB past the guest's address space.
*/
# define CPU_TLB_DYN_MAX_BITS \
MIN(22, TARGET_VIRT_ADDR_SPACE_BITS - TARGET_PAGE_BITS)
# endif
typedef struct CPUTLBEntry {
/* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address
bit TARGET_PAGE_BITS-1..4 : Nonzero for accesses that should not
@ -87,65 +107,123 @@ typedef struct CPUTLBEntry {
bit 3 : indicates that the entry is invalid
bit 2..0 : zero
*/
target_ulong addr_read;
target_ulong addr_write;
target_ulong addr_code;
/* Addend to virtual address to get host address. IO accesses
use the corresponding iotlb value. */
uintptr_t addend;
/* padding to get a power of two size */
#ifdef _MSC_VER
# define TARGET_ULONG_SIZE (TARGET_LONG_BITS/8)
# ifdef _WIN64
# define UINTPTR_SIZE 8
# else
# define UINTPTR_SIZE 4
# endif
#define DUMMY_SIZE (1 << CPU_TLB_ENTRY_BITS) - \
(TARGET_ULONG_SIZE * 3 + \
((-TARGET_ULONG_SIZE * 3) & (UINTPTR_SIZE - 1)) + \
UINTPTR_SIZE)
#if DUMMY_SIZE > 0
uint8_t dummy[DUMMY_SIZE];
#endif
#else // _MSC_VER
uint8_t dummy[(1 << CPU_TLB_ENTRY_BITS) -
(sizeof(target_ulong) * 3 +
((-sizeof(target_ulong) * 3) & (sizeof(uintptr_t) - 1)) +
sizeof(uintptr_t))];
#endif // _MSC_VER
union {
struct {
target_ulong addr_read;
target_ulong addr_write;
target_ulong addr_code;
/* Addend to virtual address to get host address. IO accesses
use the corresponding iotlb value. */
uintptr_t addend;
};
/* padding to get a power of two size */
uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
};
} CPUTLBEntry;
QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
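/* Worked example (assuming a 64-bit guest and host, CPU_TLB_ENTRY_BITS == 5):
 * three 8-byte addresses plus the 8-byte addend fill exactly 1 << 5 == 32
 * bytes, so the padding member adds nothing and the build assertion above
 * holds with no slack. */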
#define CPU_COMMON_TLB \
/* The meaning of the MMU modes is defined in the target code. */ \
CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE]; \
CPUTLBEntry tlb_v_table[NB_MMU_MODES][CPU_VTLB_SIZE]; \
hwaddr iotlb[NB_MMU_MODES][CPU_TLB_SIZE]; \
hwaddr iotlb_v[NB_MMU_MODES][CPU_VTLB_SIZE]; \
target_ulong tlb_flush_addr; \
target_ulong tlb_flush_mask; \
target_ulong vtlb_index; \
/* The IOTLB is not accessed directly inline by generated TCG code,
* so the CPUIOTLBEntry layout is not as critical as that of the
* CPUTLBEntry. (This is also why we don't want to combine the two
* structs into one.)
*/
typedef struct CPUIOTLBEntry {
/*
* @addr contains:
* - in the lower TARGET_PAGE_BITS, a physical section number
* - with the lower TARGET_PAGE_BITS masked off, an offset which
* must be added to the virtual address to obtain:
* + the ram_addr_t of the target RAM (if the physical section
* number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
* + the offset within the target MemoryRegion (otherwise)
*/
hwaddr addr;
MemTxAttrs attrs;
} CPUIOTLBEntry;
#else
/*
* Data elements that are per MMU mode, minus the bits accessed by
* the TCG fast path.
*/
typedef struct CPUTLBDesc {
/*
* Describe a region covering all of the large pages allocated
* into the tlb. When any page within this region is flushed,
* we must flush the entire tlb. The region is matched if
* (addr & large_page_mask) == large_page_addr.
*/
target_ulong large_page_addr;
target_ulong large_page_mask;
/* host time (in ns) at the beginning of the time window */
int64_t window_begin_ns;
/* maximum number of entries observed in the window */
size_t window_max_entries;
size_t n_used_entries;
/* The next index to use in the tlb victim table. */
size_t vindex;
/* The tlb victim table, in two parts. */
CPUTLBEntry vtable[CPU_VTLB_SIZE];
CPUIOTLBEntry viotlb[CPU_VTLB_SIZE];
/* The iotlb. */
CPUIOTLBEntry *iotlb;
} CPUTLBDesc;
#define CPU_COMMON_TLB
/*
* Data elements that are per MMU mode, accessed by the fast path.
* The structure is aligned to aid loading the pair with one insn.
*/
typedef struct CPUTLBDescFast {
/* Contains (n_entries - 1) << CPU_TLB_ENTRY_BITS */
uintptr_t mask;
/* The array of tlb entries itself. */
CPUTLBEntry *table;
} CPUTLBDescFast QEMU_ALIGNED(2 * sizeof(void *));
/*
* Data elements that are shared between all MMU modes.
*/
typedef struct CPUTLBCommon {
/*
* Within dirty, for each bit N, modifications have been made to
* mmu_idx N since the last time that mmu_idx was flushed.
* Protected by tlb_c.lock.
*/
uint16_t dirty;
/*
* Statistics. These are not lock protected, but are read and
* written atomically. This allows the monitor to print a snapshot
* of the stats without interfering with the cpu.
*/
size_t full_flush_count;
size_t part_flush_count;
size_t elide_flush_count;
} CPUTLBCommon;
/*
* The entire softmmu tlb, for all MMU modes.
* The meaning of each of the MMU modes is defined in the target code.
* Since this is placed within CPUNegativeOffsetState, the smallest
* negative offsets are at the end of the struct.
*/
typedef struct CPUTLB {
CPUTLBCommon c;
CPUTLBDesc d[NB_MMU_MODES];
CPUTLBDescFast f[NB_MMU_MODES];
} CPUTLB;
/* This will be used by TCG backends to compute offsets. */
#define TLB_MASK_TABLE_OFS(IDX) \
((int)offsetof(ArchCPU, neg.tlb.f[IDX]) - (int)offsetof(ArchCPU, env))
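/* Sketch (assumption: IDX == 0): a TCG backend emits a load of the
 * mask/table pair from env + TLB_MASK_TABLE_OFS(0); the offset is negative
 * because neg.tlb.f[] sits just before env within ArchCPU. */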
/*
* This structure must be placed in ArchCPU immediately
* before CPUArchState, as a field named "neg".
*/
typedef struct CPUNegativeOffsetState {
CPUTLB tlb;
IcountDecr icount_decr;
} CPUNegativeOffsetState;
#endif
#define CPU_TEMP_BUF_NLONGS 128
// Unicorn engine
// @invalid_addr: invalid memory access address
// @invalid_error: error code for memory access (1 = READ, 2 = WRITE)
#define CPU_COMMON \
/* soft mmu support */ \
CPU_COMMON_TLB \
uint64_t invalid_addr; \
int invalid_error;
#endif


@ -23,325 +23,134 @@
*
* Used by target op helpers.
*
* MMU mode suffixes are defined in target cpu.h.
* The syntax for the accessors is:
*
* load: cpu_ld{sign}{size}_{mmusuffix}(env, ptr)
* cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)
* cpu_ld{sign}{size}_mmuidx_ra(env, ptr, mmu_idx, retaddr)
*
* store: cpu_st{size}_{mmusuffix}(env, ptr, val)
* cpu_st{size}_{mmusuffix}_ra(env, ptr, val, retaddr)
* cpu_st{size}_mmuidx_ra(env, ptr, val, mmu_idx, retaddr)
*
* sign is:
* (empty): for 32 and 64 bit sizes
* u : unsigned
* s : signed
*
* size is:
* b: 8 bits
* w: 16 bits
* l: 32 bits
* q: 64 bits
*
* mmusuffix is one of the generic suffixes "data" or "code", or "mmuidx".
* The "mmuidx" suffix carries an extra mmu_idx argument that specifies
* the index to use; the "data" and "code" suffixes take the index from
* cpu_mmu_index().
*/
#ifndef CPU_LDST_H
#define CPU_LDST_H
#if defined(CONFIG_USER_ONLY)
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
#define g2h(x) ((void *)((unsigned long)(target_ulong)(x) + GUEST_BASE))
#include "cpu-defs.h"
#include "cpu.h"
#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
#define h2g_valid(x) 1
#else
#define h2g_valid(x) ({ \
unsigned long __guest = (unsigned long)(x) - GUEST_BASE; \
(__guest < (1ul << TARGET_VIRT_ADDR_SPACE_BITS)) && \
(!RESERVED_VA || (__guest < RESERVED_VA)); \
})
typedef target_ulong abi_ptr;
#define TARGET_ABI_FMT_ptr TARGET_ABI_FMT_lx
uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr);
uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr);
uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr);
uint64_t cpu_ldq_data(CPUArchState *env, abi_ptr ptr);
int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr);
int cpu_ldsw_data(CPUArchState *env, abi_ptr ptr);
uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
uint32_t cpu_lduw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
uint32_t cpu_ldl_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
uint64_t cpu_ldq_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
int cpu_ldsw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
void cpu_stw_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
void cpu_stl_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
void cpu_stq_data(CPUArchState *env, abi_ptr ptr, uint64_t val);
void cpu_stb_data_ra(CPUArchState *env, abi_ptr ptr,
uint32_t val, uintptr_t retaddr);
void cpu_stw_data_ra(CPUArchState *env, abi_ptr ptr,
uint32_t val, uintptr_t retaddr);
void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
uint32_t val, uintptr_t retaddr);
void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
uint64_t val, uintptr_t retaddr);
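/* Illustrative sketch only (helper_demo_inc32 is hypothetical): a target op
 * helper that bumps a guest word using the "data" accessors declared above,
 * passing GETPC() so a fault unwinds correctly into the generated code. */
#if 0 /* example, not part of this header */
static void helper_demo_inc32(CPUArchState *env, target_ulong ptr)
{
    uint32_t v = cpu_ldl_data_ra(env, ptr, GETPC());
    cpu_stl_data_ra(env, ptr, v + 1, GETPC());
}
#endif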
/* Needed for TCG_OVERSIZED_GUEST */
#include "tcg/tcg.h"
static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
{
return entry->addr_write;
}
#define h2g_nocheck(x) ({ \
    unsigned long __ret = (unsigned long)(x) - GUEST_BASE; \
    (abi_ulong)__ret; \
})
#define h2g(x) ({ \
    /* Check if given address fits target address space */ \
    assert(h2g_valid(x)); \
    h2g_nocheck(x); \
})
/* Find the TLB index corresponding to the mmu_idx + address pair. */
static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
                                  target_ulong addr)
{
#ifdef TARGET_ARM
    struct uc_struct *uc = env->uc;
#endif
    uintptr_t size_mask = env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;

    return (addr >> TARGET_PAGE_BITS) & size_mask;
}
/* Find the TLB entry corresponding to the mmu_idx + address pair. */
static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
                                     target_ulong addr)
{
    return &env_tlb(env)->f[mmu_idx].table[tlb_index(env, mmu_idx, addr)];
}
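/* Sketch (assumptions: softmmu build, a read probe; probe_read_fast is a
 * hypothetical helper): tlb_entry() and tlb_hit() combine to test whether a
 * load would take the TCG fast path, without touching guest memory. */
#if 0 /* example, not part of this header */
static bool probe_read_fast(CPUArchState *env, int mmu_idx, target_ulong addr)
{
    CPUTLBEntry *e = tlb_entry(env, mmu_idx, addr);
    return tlb_hit(env->uc, e->addr_read, addr);
}
#endif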
#define saddr(x) g2h(x)
#define laddr(x) g2h(x)
uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
#else /* !CONFIG_USER_ONLY */
/* NOTE: we use double casts if pointers and target_ulong have
different sizes */
#define saddr(x) (uint8_t *)(intptr_t)(x)
#define laddr(x) (uint8_t *)(intptr_t)(x)
#endif
int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
int mmu_idx, uintptr_t ra);
#define ldub_raw(p) ldub_p(laddr((p)))
#define ldsb_raw(p) ldsb_p(laddr((p)))
#define lduw_raw(p) lduw_p(laddr((p)))
#define ldsw_raw(p) ldsw_p(laddr((p)))
#define ldl_raw(p) ldl_p(laddr((p)))
#define ldq_raw(p) ldq_p(laddr((p)))
#define ldfl_raw(p) ldfl_p(laddr((p)))
#define ldfq_raw(p) ldfq_p(laddr((p)))
#define stb_raw(p, v) stb_p(saddr((p)), v)
#define stw_raw(p, v) stw_p(saddr((p)), v)
#define stl_raw(p, v) stl_p(saddr((p)), v)
#define stq_raw(p, v) stq_p(saddr((p)), v)
#define stfl_raw(p, v) stfl_p(saddr((p)), v)
#define stfq_raw(p, v) stfq_p(saddr((p)), v)
void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t retaddr);
void cpu_stw_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t retaddr);
void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
int mmu_idx, uintptr_t retaddr);
void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
int mmu_idx, uintptr_t retaddr);
#if defined(CONFIG_USER_ONLY)
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr);
/* if user mode, no other memory access functions */
#define ldub(p) ldub_raw(p)
#define ldsb(p) ldsb_raw(p)
#define lduw(p) lduw_raw(p)
#define ldsw(p) ldsw_raw(p)
#define ldl(p) ldl_raw(p)
#define ldq(p) ldq_raw(p)
#define ldfl(p) ldfl_raw(p)
#define ldfq(p) ldfq_raw(p)
#define stb(p, v) stb_raw(p, v)
#define stw(p, v) stw_raw(p, v)
#define stl(p, v) stl_raw(p, v)
#define stq(p, v) stq_raw(p, v)
#define stfl(p, v) stfl_raw(p, v)
#define stfq(p, v) stfq_raw(p, v)
static inline int cpu_ldsb_code(CPUArchState *env, abi_ptr addr)
{
return (int8_t)cpu_ldub_code(env, addr);
}
#define cpu_ldub_code(env1, p) ldub_raw(p)
#define cpu_ldsb_code(env1, p) ldsb_raw(p)
#define cpu_lduw_code(env1, p) lduw_raw(p)
#define cpu_ldsw_code(env1, p) ldsw_raw(p)
#define cpu_ldl_code(env1, p) ldl_raw(p)
#define cpu_ldq_code(env1, p) ldq_raw(p)
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldsw_data(env, addr) ldsw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define cpu_ldq_data(env, addr) ldq_raw(addr)
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#define cpu_stq_data(env, addr, data) stq_raw(addr, data)
#define cpu_ldub_kernel(env, addr) ldub_raw(addr)
#define cpu_lduw_kernel(env, addr) lduw_raw(addr)
#define cpu_ldsw_kernel(env, addr) ldsw_raw(addr)
#define cpu_ldl_kernel(env, addr) ldl_raw(addr)
#define cpu_ldq_kernel(env, addr) ldq_raw(addr)
#define cpu_stb_kernel(env, addr, data) stb_raw(addr, data)
#define cpu_stw_kernel(env, addr, data) stw_raw(addr, data)
#define cpu_stl_kernel(env, addr, data) stl_raw(addr, data)
#define cpu_stq_kernel(env, addr, data) stq_raw(addr, data)
#define ldub_kernel(p) ldub_raw(p)
#define ldsb_kernel(p) ldsb_raw(p)
#define lduw_kernel(p) lduw_raw(p)
#define ldsw_kernel(p) ldsw_raw(p)
#define ldl_kernel(p) ldl_raw(p)
#define ldq_kernel(p) ldq_raw(p)
#define ldfl_kernel(p) ldfl_raw(p)
#define ldfq_kernel(p) ldfq_raw(p)
#define stb_kernel(p, v) stb_raw(p, v)
#define stw_kernel(p, v) stw_raw(p, v)
#define stl_kernel(p, v) stl_raw(p, v)
#define stq_kernel(p, v) stq_raw(p, v)
#define stfl_kernel(p, v) stfl_raw(p, v)
#define stfq_kernel(p, vt) stfq_raw(p, v)
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#else
/* XXX: find something cleaner.
 * Furthermore, this is false for 64-bit targets
*/
#define ldul_user ldl_user
#define ldul_kernel ldl_kernel
#define ldul_hypv ldl_hypv
#define ldul_executive ldl_executive
#define ldul_supervisor ldl_supervisor
/* The memory helpers for tcg-generated code need tcg_target_long etc. */
#include "tcg.h"
uint8_t helper_ldb_mmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint16_t helper_ldw_mmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint32_t helper_ldl_mmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr, int mmu_idx);
void helper_stb_mmu(CPUArchState *env, target_ulong addr,
uint8_t val, int mmu_idx);
void helper_stw_mmu(CPUArchState *env, target_ulong addr,
uint16_t val, int mmu_idx);
void helper_stl_mmu(CPUArchState *env, target_ulong addr,
uint32_t val, int mmu_idx);
void helper_stq_mmu(CPUArchState *env, target_ulong addr,
uint64_t val, int mmu_idx);
uint8_t helper_ldb_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint16_t helper_ldw_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint32_t helper_ldl_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#define CPU_MMU_INDEX 0
#define MEMSUFFIX MMU_MODE0_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#define CPU_MMU_INDEX 1
#define MEMSUFFIX MMU_MODE1_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#if (NB_MMU_MODES >= 3)
#define CPU_MMU_INDEX 2
#define MEMSUFFIX MMU_MODE2_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 3) */
#if (NB_MMU_MODES >= 4)
#define CPU_MMU_INDEX 3
#define MEMSUFFIX MMU_MODE3_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 4) */
#if (NB_MMU_MODES >= 5)
#define CPU_MMU_INDEX 4
#define MEMSUFFIX MMU_MODE4_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 5) */
#if (NB_MMU_MODES >= 6)
#define CPU_MMU_INDEX 5
#define MEMSUFFIX MMU_MODE5_SUFFIX
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 6) */
#if (NB_MMU_MODES > 6)
#error "NB_MMU_MODES > 6 is not supported for now"
#endif /* (NB_MMU_MODES > 6) */
/* these accesses are slower; they must be as rare as possible */
#define CPU_MMU_INDEX (cpu_mmu_index(env))
#define MEMSUFFIX _data
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#define ldub(p) ldub_data(p)
#define ldsb(p) ldsb_data(p)
#define lduw(p) lduw_data(p)
#define ldsw(p) ldsw_data(p)
#define ldl(p) ldl_data(p)
#define ldq(p) ldq_data(p)
#define stb(p, v) stb_data(p, v)
#define stw(p, v) stw_data(p, v)
#define stl(p, v) stl_data(p, v)
#define stq(p, v) stq_data(p, v)
#define CPU_MMU_INDEX (cpu_mmu_index(env))
#define MEMSUFFIX _code
#define SOFTMMU_CODE_ACCESS
#define DATA_SIZE 1
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#undef SOFTMMU_CODE_ACCESS
static inline int cpu_ldsw_code(CPUArchState *env, abi_ptr addr)
{
return (int16_t)cpu_lduw_code(env, addr);
}
/**
* tlb_vaddr_to_host:
@ -351,50 +160,12 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
* @mmu_idx: MMU index to use for lookup
*
* Look up the specified guest virtual index in the TCG softmmu TLB.
* If the TLB contains a host virtual address suitable for direct RAM
* access, then return it. Otherwise (TLB miss, TLB entry is for an
* I/O access, etc) return NULL.
*
* This is the equivalent of the initial fast-path code used by
* TCG backends for guest load and store accesses.
* If we can translate a host virtual address suitable for direct RAM
* access, without causing a guest exception, then return it.
* Otherwise (TLB entry is for an I/O access, guest software
* TLB fill required, etc) return NULL.
*/
static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
int access_type, int mmu_idx)
{
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
CPUTLBEntry *tlbentry = &env->tlb_table[mmu_idx][index];
target_ulong tlb_addr;
uintptr_t haddr;
switch (access_type) {
case 0:
tlb_addr = tlbentry->addr_read;
break;
case 1:
tlb_addr = tlbentry->addr_write;
break;
case 2:
tlb_addr = tlbentry->addr_code;
break;
default:
g_assert_not_reached();
}
if ((addr & TARGET_PAGE_MASK)
!= (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
/* TLB entry is for a different page */
return NULL;
}
if (tlb_addr & ~TARGET_PAGE_MASK) {
/* IO access */
return NULL;
}
haddr = (uintptr_t)(addr + env->tlb_table[mmu_idx][index].addend);
return (void *)haddr;
}
#endif /* defined(CONFIG_USER_ONLY) */
void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
MMUAccessType access_type, int mmu_idx);
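/* Usage sketch (assumption: a softmmu helper probing a store):
 *
 *     void *host = tlb_vaddr_to_host(env, addr, MMU_DATA_STORE, mmu_idx);
 *     if (host) {
 *         memcpy(host, bytes, len);    // direct RAM: no I/O, no fault
 *     } else {
 *         // fall back to the slow path, e.g. cpu_stb_mmuidx_ra()
 *     }
 */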
#endif /* CPU_LDST_H */


@ -1,193 +0,0 @@
/*
* Software MMU support
*
* Generate inline load/store functions for one MMU mode and data
* size.
*
* Generate a store function as well as signed and unsigned loads. For
* 32 and 64 bit cases, also generate floating point functions with
* the same size.
*
* Not used directly but included from cpu_ldst.h.
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 8
#define SUFFIX q
#define USUFFIX q
#define DATA_TYPE uint64_t
#elif DATA_SIZE == 4
#define SUFFIX l
#define USUFFIX l
#define DATA_TYPE uint32_t
#elif DATA_SIZE == 2
#define SUFFIX w
#define USUFFIX uw
#define DATA_TYPE uint16_t
#define DATA_STYPE int16_t
#elif DATA_SIZE == 1
#define SUFFIX b
#define USUFFIX ub
#define DATA_TYPE uint8_t
#define DATA_STYPE int8_t
#else
#error unsupported data size
#endif
#if DATA_SIZE == 8
#define RES_TYPE uint64_t
#else
#define RES_TYPE uint32_t
#endif
#ifdef SOFTMMU_CODE_ACCESS
#define ADDR_READ addr_code
#define MMUSUFFIX _cmmu
#else
#define ADDR_READ addr_read
#define MMUSUFFIX _mmu
#endif
/* generic load/store macros */
static inline RES_TYPE
glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
{
int page_index;
RES_TYPE res;
target_ulong addr;
int mmu_idx;
addr = ptr;
page_index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = CPU_MMU_INDEX;
if (unlikely(env->tlb_table[mmu_idx][page_index].ADDR_READ !=
(addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
res = glue(glue(helper_ld, SUFFIX), MMUSUFFIX)(env, addr, mmu_idx);
} else {
uintptr_t hostaddr = (uintptr_t)(addr + env->tlb_table[mmu_idx][page_index].addend);
res = glue(glue(ld, USUFFIX), _raw)(hostaddr);
}
return res;
}
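/* For example, with DATA_SIZE == 4, MEMSUFFIX == _data and no
 * SOFTMMU_CODE_ACCESS, the function above expands to
 *
 *     static inline uint32_t cpu_ldl_data(CPUArchState *env, target_ulong ptr)
 *
 * which takes the inline fast path when the TLB entry for CPU_MMU_INDEX
 * matches and falls back to helper_ldl_mmu() otherwise. */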
#if DATA_SIZE <= 2
static inline int
glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
{
int res, page_index;
target_ulong addr;
int mmu_idx;
addr = ptr;
page_index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = CPU_MMU_INDEX;
if (unlikely(env->tlb_table[mmu_idx][page_index].ADDR_READ !=
(addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
res = (DATA_STYPE)glue(glue(helper_ld, SUFFIX),
MMUSUFFIX)(env, addr, mmu_idx);
} else {
uintptr_t hostaddr = (uintptr_t)(addr + env->tlb_table[mmu_idx][page_index].addend);
res = glue(glue(lds, SUFFIX), _raw)(hostaddr);
}
return res;
}
#endif
#ifndef SOFTMMU_CODE_ACCESS
/* generic store macro */
static inline void
glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
RES_TYPE v)
{
int page_index;
target_ulong addr;
int mmu_idx;
addr = ptr;
page_index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = CPU_MMU_INDEX;
if (unlikely(env->tlb_table[mmu_idx][page_index].addr_write !=
(addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
glue(glue(helper_st, SUFFIX), MMUSUFFIX)(env, addr, v, mmu_idx);
} else {
uintptr_t hostaddr = (uintptr_t)(addr + env->tlb_table[mmu_idx][page_index].addend);
glue(glue(st, SUFFIX), _raw)(hostaddr, v);
}
}
#if DATA_SIZE == 8
static inline float64 glue(cpu_ldfq, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr)
{
union {
float64 d;
uint64_t i;
} u;
u.i = glue(cpu_ldq, MEMSUFFIX)(env, ptr);
return u.d;
}
static inline void glue(cpu_stfq, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr, float64 v)
{
union {
float64 d;
uint64_t i;
} u;
u.d = v;
glue(cpu_stq, MEMSUFFIX)(env, ptr, u.i);
}
#endif /* DATA_SIZE == 8 */
#if DATA_SIZE == 4
static inline float32 glue(cpu_ldfl, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr)
{
union {
float32 f;
uint32_t i;
} u;
u.i = glue(cpu_ldl, MEMSUFFIX)(env, ptr);
return u.f;
}
static inline void glue(cpu_stfl, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr, float32 v)
{
union {
float32 f;
uint32_t i;
} u;
u.f = v;
glue(cpu_stl, MEMSUFFIX)(env, ptr, u.i);
}
#endif /* DATA_SIZE == 4 */
#endif /* !SOFTMMU_CODE_ACCESS */
#undef RES_TYPE
#undef DATA_TYPE
#undef DATA_STYPE
#undef SUFFIX
#undef USUFFIX
#undef DATA_SIZE
#undef MMUSUFFIX
#undef ADDR_READ


@ -16,33 +16,13 @@
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef CPUTLB_H
#define CPUTLB_H
#if !defined(CONFIG_USER_ONLY)
#include "exec/cpu-common.h"
/* cputlb.c */
void tlb_protect_code(struct uc_struct *uc, ram_addr_t ram_addr);
void tlb_unprotect_code_phys(CPUState *cpu, ram_addr_t ram_addr,
target_ulong vaddr);
void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry,
uintptr_t start, uintptr_t length);
void cpu_tlb_reset_dirty_all(struct uc_struct *uc, ram_addr_t start1, ram_addr_t length);
void tlb_set_dirty(CPUArchState *env, target_ulong vaddr);
//extern int tlb_flush_count;
/* exec.c */
void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
MemoryRegionSection *
address_space_translate_for_iotlb(AddressSpace *as, hwaddr addr, hwaddr *xlat,
hwaddr *plen);
hwaddr memory_region_section_get_iotlb(CPUState *cpu,
MemoryRegionSection *section,
target_ulong vaddr,
hwaddr paddr, hwaddr xlat,
int prot,
target_ulong *address);
bool memory_region_is_unassigned(struct uc_struct* uc, MemoryRegion *mr);
#endif
void tlb_unprotect_code(struct uc_struct *uc, ram_addr_t ram_addr);
#endif


@ -17,10 +17,13 @@
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _EXEC_ALL_H_
#define _EXEC_ALL_H_
#ifndef EXEC_ALL_H
#define EXEC_ALL_H
#include "qemu-common.h"
#include "hw/core/cpu.h"
#include "exec/tb-context.h"
#include "exec/cpu_ldst.h"
#include "sysemu/cpus.h"
/* Allow seeing translation results - the slowdown should be negligible, so we leave it. */
#define DEBUG_DISAS
@ -28,289 +31,372 @@
/* Page tracking code uses ram addresses in system mode, and virtual
addresses in userspace mode. Define tb_page_addr_t to be an appropriate
type. */
#if defined(CONFIG_USER_ONLY)
typedef abi_ulong tb_page_addr_t;
#else
typedef ram_addr_t tb_page_addr_t;
#endif
/* is_jmp field values */
#define DISAS_NEXT 0 /* next instruction can be analyzed */
#define DISAS_JUMP 1 /* only pc was modified dynamically */
#define DISAS_UPDATE 2 /* cpu state was modified dynamically */
#define DISAS_TB_JUMP 3 /* only pc was modified statically */
struct TranslationBlock;
typedef struct TranslationBlock TranslationBlock;
/* XXX: make safe guess about sizes */
#define MAX_OP_PER_INSTR 266
#if HOST_LONG_BITS == 32
#define MAX_OPC_PARAM_PER_ARG 2
#else
#define MAX_OPC_PARAM_PER_ARG 1
#endif
#define MAX_OPC_PARAM_IARGS 5
#define MAX_OPC_PARAM_OARGS 1
#define MAX_OPC_PARAM_ARGS (MAX_OPC_PARAM_IARGS + MAX_OPC_PARAM_OARGS)
/* A Call op needs up to 4 + 2N parameters on 32-bit archs,
* and up to 4 + N parameters on 64-bit archs
* (N = number of input arguments + output arguments). */
#define MAX_OPC_PARAM (4 + (MAX_OPC_PARAM_PER_ARG * MAX_OPC_PARAM_ARGS))
#define OPC_MAX_SIZE (OPC_BUF_SIZE - MAX_OP_PER_INSTR)
/* Maximum size a TCG op can expand to. This is complicated because a
single op may require several host instructions and register reloads.
For now take a wild guess at 192 bytes, which should allow at least
a couple of fixup instructions per argument. */
#define TCG_MAX_OP_SIZE 192
#define OPPARAM_BUF_SIZE (OPC_BUF_SIZE * MAX_OPC_PARAM)
#define TB_PAGE_ADDR_FMT RAM_ADDR_FMT
#include "qemu/log.h"
void gen_intermediate_code(CPUArchState *env, struct TranslationBlock *tb);
void gen_intermediate_code_pc(CPUArchState *env, struct TranslationBlock *tb);
void restore_state_to_opc(CPUArchState *env, struct TranslationBlock *tb,
int pc_pos);
bool cpu_restore_state(CPUState *cpu, uintptr_t searched_pc);
void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns);
void restore_state_to_opc(CPUArchState *env, TranslationBlock *tb,
target_ulong *data);
void QEMU_NORETURN cpu_resume_from_signal(CPUState *cpu, void *puc);
/**
* cpu_restore_state:
 * @cpu: the vCPU whose state is to be restored
 * @searched_pc: the host PC the fault occurred at
 * @will_exit: true if the TB executed will be interrupted after some
 *             cpu adjustments. Required for maintaining the correct
 *             icount values
* @return: true if state was restored, false otherwise
*
* Attempt to restore the state for a fault occurring in translated
* code. If the searched_pc is not in translated code no state is
* restored and the function returns false.
*/
bool cpu_restore_state(CPUState *cpu, uintptr_t searched_pc, bool will_exit);
void QEMU_NORETURN cpu_loop_exit_noexc(CPUState *cpu);
void QEMU_NORETURN cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
TranslationBlock *tb_gen_code(CPUState *cpu,
target_ulong pc, target_ulong cs_base, int flags,
target_ulong pc, target_ulong cs_base,
uint32_t flags,
int cflags);
void cpu_exec_init(CPUArchState *env, void *opaque);
void QEMU_NORETURN cpu_loop_exit(CPUState *cpu);
void QEMU_NORETURN cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc);
void QEMU_NORETURN cpu_loop_exit_atomic(CPUState *cpu, uintptr_t pc);
/**
* cpu_loop_exit_requested:
* @cpu: The CPU state to be tested
*
* Indicate if somebody asked for a return of the CPU to the main loop
* (e.g., via cpu_exit() or cpu_interrupt()).
*
* This is helpful for architectures that support interruptible
* instructions. After writing back all state to registers/memory, this
* call can be used to check if it makes sense to return to the main loop
* or to continue executing the interruptible instruction.
*/
static inline bool cpu_loop_exit_requested(CPUState *cpu)
{
return (int32_t)cpu_neg(cpu)->icount_decr.u32 < 0;
}
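/* Usage sketch (assumption: a helper implementing an interruptible
 * instruction, with partial state already written back):
 *
 *     if (cpu_loop_exit_requested(cs)) {
 *         cpu_loop_exit_restore(cs, ra);  // return to the main loop
 *     }
 */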
void cpu_reloading_memory_map(void);
/**
* cpu_address_space_init:
* @cpu: CPU to add this address space to
* @asidx: integer index of this address space
* @mr: the root memory region of address space
*
* Add the specified address space to the CPU's cpu_ases list.
* The address space added with @asidx 0 is the one used for the
* convenience pointer cpu->as.
* The target-specific code which registers ASes is responsible
* for defining what semantics address space 0, 1, 2, etc have.
*
* Before the first call to this function, the caller must set
* cpu->num_ases to the total number of address spaces it needs
* to support.
*/
void cpu_address_space_init(CPUState *cpu, int asidx, MemoryRegion *mr);
void tb_invalidate_phys_page_range(struct uc_struct *uc, tb_page_addr_t start, tb_page_addr_t end,
int is_cpu_write_access);
void tb_invalidate_phys_range(struct uc_struct *uc, tb_page_addr_t start, tb_page_addr_t end,
int is_cpu_write_access);
#if !defined(CONFIG_USER_ONLY)
void tcg_cpu_address_space_init(CPUState *cpu, AddressSpace *as);
/* cputlb.c */
/**
* tlb_init - initialize a CPU's TLB
* @cpu: CPU whose TLB should be initialized
*/
void tlb_init(CPUState *cpu);
/**
* tlb_flush_page:
* @cpu: CPU whose TLB should be flushed
* @addr: virtual address of page to be flushed
*
* Flush one page from the TLB of the specified CPU, for all
* MMU indexes.
*/
void tlb_flush_page(CPUState *cpu, target_ulong addr);
/**
* tlb_flush_page_all_cpus:
* @cpu: src CPU of the flush
* @addr: virtual address of page to be flushed
*
* Flush one page from the TLB of the specified CPU, for all
* MMU indexes.
*/
void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr);
/**
* tlb_flush_page_all_cpus_synced:
* @cpu: src CPU of the flush
* @addr: virtual address of page to be flushed
*
* Flush one page from the TLB of the specified CPU, for all MMU
 * indexes like tlb_flush_page_all_cpus except the source vCPU's work
 * is scheduled as safe work, meaning all flushes will be complete once
 * the source vCPU's safe work is complete. This will depend on when
 * the guest's translation ends the TB.
*/
void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr);
/**
* tlb_flush:
* @cpu: CPU whose TLB should be flushed
*
* Flush the entire TLB for the specified CPU. Most CPU architectures
* allow the implementation to drop entries from the TLB at any time
* so this is generally safe. If more selective flushing is required
* use one of the other functions for efficiency.
*/
void tlb_flush(CPUState *cpu);
/**
* tlb_flush_all_cpus:
 * @cpu: src CPU of the flush
 *
 * Flush the entire TLB of all CPUs, for all MMU indexes.
 */
void tlb_flush_all_cpus(CPUState *src_cpu);
/**
* tlb_flush_all_cpus_synced:
* @cpu: src CPU of the flush
*
 * Like tlb_flush_all_cpus except the source vCPU's work is scheduled
 * as safe work, meaning all flushes will be complete once the source
 * vCPU's safe work is complete. This will depend on when the guest's
 * translation ends the TB.
*/
void tlb_flush_all_cpus_synced(CPUState *src_cpu);
/**
* tlb_flush_page_by_mmuidx:
* @cpu: CPU whose TLB should be flushed
* @addr: virtual address of page to be flushed
* @idxmap: bitmap of MMU indexes to flush
*
* Flush one page from the TLB of the specified CPU, for the specified
* MMU indexes.
*/
void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr,
uint16_t idxmap);
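/*
 * Illustrative sketch (not in the original): @idxmap is a bitmap, so a
 * page can be flushed for several MMU modes in one call by ORing their
 * index bits. The concrete indexes 0 and 1 are hypothetical.
 */
static inline void example_flush_two_mmu_modes(CPUState *cpu,
                                               target_ulong addr)
{
    tlb_flush_page_by_mmuidx(cpu, addr, (1 << 0) | (1 << 1));
}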
/**
* tlb_flush_page_by_mmuidx_all_cpus:
* @cpu: Originating CPU of the flush
* @addr: virtual address of page to be flushed
* @idxmap: bitmap of MMU indexes to flush
*
* Flush one page from the TLB of all CPUs, for the specified
* MMU indexes.
*/
void tlb_flush_page_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
uint16_t idxmap);
/**
* tlb_flush_page_by_mmuidx_all_cpus_synced:
* @cpu: Originating CPU of the flush
* @addr: virtual address of page to be flushed
* @idxmap: bitmap of MMU indexes to flush
*
* Flush one page from the TLB of all CPUs, for the specified MMU
 * indexes like tlb_flush_page_by_mmuidx_all_cpus except the source
 * vCPU's work is scheduled as safe work, meaning all flushes will be
 * complete once the source vCPU's safe work is complete. This will
 * depend on when the guest's translation ends the TB.
*/
void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
uint16_t idxmap);
/**
* tlb_flush_by_mmuidx:
* @cpu: CPU whose TLB should be flushed
* @idxmap: bitmap of MMU indexes to flush
*
* Flush all entries from the TLB of the specified CPU, for the specified
* MMU indexes.
*/
void tlb_flush_by_mmuidx(CPUState *cpu, uint16_t idxmap);
/**
* tlb_flush_by_mmuidx_all_cpus:
* @cpu: Originating CPU of the flush
* @idxmap: bitmap of MMU indexes to flush
*
* Flush all entries from all TLBs of all CPUs, for the specified
* MMU indexes.
*/
void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap);
/**
* tlb_flush_by_mmuidx_all_cpus_synced:
* @cpu: Originating CPU of the flush
* @idxmap: bitmap of MMU indexes to flush
*
* Flush all entries from all TLBs of all CPUs, for the specified
 * MMU indexes like tlb_flush_by_mmuidx_all_cpus except the source
 * vCPU's work is scheduled as safe work, meaning all flushes will be
 * complete once the source vCPU's safe work is complete. This will
 * depend on when the guest's translation ends the TB.
*/
void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
/**
* tlb_set_page_with_attrs:
* @cpu: CPU to add this TLB entry for
* @vaddr: virtual address of page to add entry for
* @paddr: physical address of the page
* @attrs: memory transaction attributes
* @prot: access permissions (PAGE_READ/PAGE_WRITE/PAGE_EXEC bits)
* @mmu_idx: MMU index to insert TLB entry for
* @size: size of the page in bytes
*
* Add an entry to this CPU's TLB (a mapping from virtual address
* @vaddr to physical address @paddr) with the specified memory
* transaction attributes. This is generally called by the target CPU
* specific code after it has been called through the tlb_fill()
* entry point and performed a successful page table walk to find
* the physical address and attributes for the virtual address
* which provoked the TLB miss.
*
* At most one entry for a given virtual address is permitted. Only a
* single TARGET_PAGE_SIZE region is mapped; the supplied @size is only
* used by tlb_flush_page.
*/
void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, MemTxAttrs attrs,
int prot, int mmu_idx, target_ulong size);
/* tlb_set_page:
*
* This function is equivalent to calling tlb_set_page_with_attrs()
* with an @attrs argument of MEMTXATTRS_UNSPECIFIED. It's provided
* as a convenience for CPUs which don't use memory transaction attributes.
*/
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
hwaddr paddr, int prot,
int mmu_idx, target_ulong size);
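/*
 * Illustrative sketch (not in the original): the usual call site after
 * a successful page table walk in a target's tlb_fill path. Only one
 * TARGET_PAGE_SIZE region is mapped per entry, so both addresses are
 * page-aligned first. The surrounding walk logic is assumed.
 */
static inline void example_install_translation(CPUState *cpu,
                                               target_ulong vaddr,
                                               hwaddr paddr,
                                               int prot, int mmu_idx)
{
    tlb_set_page(cpu, vaddr & TARGET_PAGE_MASK, paddr & TARGET_PAGE_MASK,
                 prot, mmu_idx, TARGET_PAGE_SIZE);
}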
void *probe_access(CPUArchState *env, target_ulong addr, int size,
MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);
#else
static inline void tlb_flush_page(CPUState *cpu, target_ulong addr)
{
}
static inline void *probe_write(CPUArchState *env, target_ulong addr, int size,
int mmu_idx, uintptr_t retaddr)
{
return probe_access(env, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
}
static inline void tlb_flush(CPUState *cpu)
{
}
static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
int mmu_idx, uintptr_t retaddr)
{
return probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
}
#endif
#define CODE_GEN_ALIGN 16 /* must be >= the size of an icache line */
#define CODE_GEN_PHYS_HASH_BITS 15
#define CODE_GEN_PHYS_HASH_SIZE (1 << CODE_GEN_PHYS_HASH_BITS)
/* Estimated block size for TB allocation. */
/* ??? The following is based on a 2015 survey of x86_64 host output.
Better would seem to be some sort of dynamically sized TB array,
adapting to the block sizes actually being produced. */
#define CODE_GEN_AVG_BLOCK_SIZE 400
#if defined(__arm__) || defined(_ARCH_PPC) \
|| defined(__x86_64__) || defined(__i386__) \
|| defined(__sparc__) || defined(__aarch64__) \
|| defined(__s390x__) || defined(__mips__) \
|| defined(CONFIG_TCG_INTERPRETER)
#define USE_DIRECT_JUMP
#endif
/*
* Translation Cache-related fields of a TB.
 * This struct exists just for convenience; we keep track of TBs in a binary
 * search tree, and the only fields needed to compare TBs in the tree are
 * @ptr and @size.
* Note: the address of search data can be obtained by adding @size to @ptr.
*/
struct tb_tc {
void *ptr; /* pointer to the translated code */
size_t size;
};
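/*
 * Illustrative sketch (not in the original): per the note above, a TB's
 * search data immediately follows its translated code, so it is found
 * by adding @size to @ptr.
 */
static inline const void *example_tb_search_data(const struct tb_tc *tc)
{
    return (const char *)tc->ptr + tc->size;
}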
struct TranslationBlock {
target_ulong pc; /* simulated PC corresponding to this block (EIP + CS base) */
target_ulong cs_base; /* CS base for this block */
    uint32_t flags; /* flags defining in which context the code was generated */
uint16_t size; /* size of target code for this block (1 <=
size <= TARGET_PAGE_SIZE) */
uint16_t icount;
uint32_t cflags; /* compile flags */
#define CF_COUNT_MASK 0x00007fff
#define CF_LAST_IO 0x00008000 /* Last insn may be an IO access. */
#define CF_NOCACHE 0x00010000 /* To be freed after execution */
#define CF_USE_ICOUNT 0x00020000
#define CF_INVALID 0x00040000 /* TB is stale. Set with @jmp_lock held */
#define CF_PARALLEL 0x00080000 /* Generate code for a parallel context */
#define CF_CLUSTER_MASK 0xff000000 /* Top 8 bits are cluster ID */
#define CF_CLUSTER_SHIFT 24
/* cflags' mask for hashing/comparison */
#define CF_HASH_MASK \
(CF_COUNT_MASK | CF_LAST_IO | CF_USE_ICOUNT | CF_PARALLEL | CF_CLUSTER_MASK)
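/* Illustrative example (not in the original): a TB's cluster ID is
 * recovered as (cflags & CF_CLUSTER_MASK) >> CF_CLUSTER_SHIFT, i.e.
 * the top 8 bits of cflags. */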
void *tc_ptr; /* pointer to the translated code */
/* next matching tb for physical address. */
struct TranslationBlock *phys_hash_next;
/* Per-vCPU dynamic tracing state used to generate this TB */
uint32_t trace_vcpu_dstate;
struct tb_tc tc;
/* original tb when cflags has CF_NOCACHE */
struct TranslationBlock *orig_tb;
    /* first and second physical page containing code. The lower bit
       of the pointer tells the index in page_next[].
       The list is protected by the TB's page('s) lock(s) */
    uintptr_t page_next[2];
tb_page_addr_t page_addr[2];
/* the following data are used to directly call another TB from
the code of this one. */
uint16_t tb_next_offset[2]; /* offset of original jump target */
#ifdef USE_DIRECT_JUMP
uint16_t tb_jmp_offset[2]; /* offset of jump instruction */
#else
uintptr_t tb_next[2]; /* address of jump generated code */
#endif
/* list of TBs jumping to this one. This is a circular list using
the two least significant bits of the pointers to tell what is
the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 =
jmp_first */
struct TranslationBlock *jmp_next[2];
struct TranslationBlock *jmp_first;
/* The following data are used to directly call another TB from
* the code of this one. This can be done either by emitting direct or
* indirect native jump instructions. These jumps are reset so that the TB
* just continues its execution. The TB can be linked to another one by
* setting one of the jump targets (or patching the jump instruction). Only
* two of such jumps are supported.
*/
uint16_t jmp_reset_offset[2]; /* offset of original jump target */
#define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated */
uintptr_t jmp_target_arg[2]; /* target address or offset */
/*
* Each TB has a NULL-terminated list (jmp_list_head) of incoming jumps.
* Each TB can have two outgoing jumps, and therefore can participate
* in two lists. The list entries are kept in jmp_list_next[2]. The least
* significant bit (LSB) of the pointers in these lists is used to encode
* which of the two list entries is to be used in the pointed TB.
*
* List traversals are protected by jmp_lock. The destination TB of each
* outgoing jump is kept in jmp_dest[] so that the appropriate jmp_lock
* can be acquired from any origin TB.
*
* jmp_dest[] are tagged pointers as well. The LSB is set when the TB is
* being invalidated, so that no further outgoing jumps from it can be set.
*
* jmp_lock also protects the CF_INVALID cflag; a jump must not be chained
* to a destination TB that has CF_INVALID set.
*/
uintptr_t jmp_list_head;
uintptr_t jmp_list_next[2];
uintptr_t jmp_dest[2];
uint32_t hash; // unicorn needs this hash to remove this TB from QHT cache
};
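/*
 * Illustrative sketch (not in the original): entries in the jump lists
 * above are tagged pointers, so a consumer separates the TB address
 * from the outgoing-slot index with a mask, as sketched here.
 */
static inline struct TranslationBlock *example_jmp_untag(uintptr_t entry,
                                                         int *slot)
{
    *slot = entry & 1;                            /* which jmp slot */
    return (struct TranslationBlock *)(entry & ~(uintptr_t)1);
}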
typedef struct TBContext TBContext;
// extern bool parallel_cpus;
struct TBContext {
TranslationBlock *tbs;
TranslationBlock *tb_phys_hash[CODE_GEN_PHYS_HASH_SIZE];
int nb_tbs;
/* statistics */
int tb_flush_count;
int tb_phys_invalidate_count;
int tb_invalidated_flag;
};
static inline unsigned int tb_jmp_cache_hash_page(target_ulong pc)
{
    target_ulong tmp;
    tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
    return (tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK;
}

/* Hide the atomic_read to make code a little easier on the eyes */
static inline uint32_t tb_cflags(const TranslationBlock *tb)
{
    return tb->cflags;
}
static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
{
    target_ulong tmp;
    tmp = pc ^ (pc >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS));
    return (((tmp >> (TARGET_PAGE_BITS - TB_JMP_PAGE_BITS)) & TB_JMP_PAGE_MASK)
            | (tmp & TB_JMP_ADDR_MASK));
}

/* current cflags for hashing/comparison */
static inline uint32_t curr_cflags(void)
{
    return 0;
}
static inline unsigned int tb_phys_hash_func(tb_page_addr_t pc)
{
return (pc >> 2) & (CODE_GEN_PHYS_HASH_SIZE - 1);
}
/* TranslationBlock invalidate API */
void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
void tb_flush(CPUState *cpu);
void tb_phys_invalidate(TCGContext *tcg_ctx, TranslationBlock *tb, tb_page_addr_t page_addr);
TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
target_ulong cs_base, uint32_t flags,
uint32_t cf_mask);
void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr);
void tb_exec_lock(TCGContext*);
void tb_exec_unlock(TCGContext*);
void tb_free(struct uc_struct *uc, TranslationBlock *tb);
static inline void tb_add_jump(TranslationBlock *tb, int n,
TranslationBlock *tb_next)
{
/* NOTE: this test is only needed for thread safety */
if (!tb->jmp_next[n]) {
/* patch the native jump address */
tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
/* add in TB jmp circular list */
tb->jmp_next[n] = tb_next->jmp_first;
tb_next->jmp_first = (TranslationBlock *)((uintptr_t)(tb) | (n));
}
}
/* GETPC is the true target of the return instruction that we'll execute. */
#ifdef _MSC_VER
#include <intrin.h>
# define GETPC() (uintptr_t)_ReturnAddress()
#else
# define GETPC() \
    ((uintptr_t)__builtin_extract_return_addr(__builtin_return_address(0)))
#endif
@ -321,59 +407,73 @@ extern uintptr_t tci_tb_ptr;
to indicate the compressed mode; subtracting two works around that. It
is also the case that there are no host isas that contain a call insn
smaller than 4 bytes, so we don't worry about special-casing this. */
#define GETPC_ADJ 2

#if defined(CONFIG_DEBUG_TCG)
void assert_no_pages_locked(void);
#else
static inline void assert_no_pages_locked(void)
{
}
#endif
#if !defined(CONFIG_USER_ONLY)
void phys_mem_set_alloc(void *(*alloc)(size_t, uint64_t *align));
struct MemoryRegion *iotlb_to_region(AddressSpace *as, hwaddr index);
bool io_mem_read(struct MemoryRegion *mr, hwaddr addr,
uint64_t *pvalue, unsigned size);
bool io_mem_write(struct MemoryRegion *mr, hwaddr addr,
uint64_t value, unsigned size);
void tlb_fill(CPUState *cpu, target_ulong addr, int is_write, int mmu_idx,
uintptr_t retaddr);
#endif
#if defined(CONFIG_USER_ONLY)
static inline tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
{
    return addr;
}
#else
/* vl.c */
extern int singlestep;
/* cpu-exec.c */
extern volatile sig_atomic_t exit_request;
/**
 * cpu_can_do_io:
 * @cpu: The CPU for which to check IO.
 *
 * Deterministic execution requires that IO only be performed on the last
 * instruction of a TB so that interrupts take effect immediately.
 *
 * Returns: %true if memory-mapped IO is safe, %false otherwise.
 */
static inline bool cpu_can_do_io(CPUState *cpu)
{
    return true;
}

/**
 * iotlb_to_section:
 * @cpu: CPU performing the access
 * @index: TCG CPU IOTLB entry
 *
 * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
 * it refers to. @index will have been initially created and returned
 * by memory_region_section_get_iotlb().
 */
struct MemoryRegionSection *iotlb_to_section(CPUState *cpu,
                                             hwaddr index, MemTxAttrs attrs);
void phys_mem_clean(struct uc_struct* uc);
static inline void mmap_lock(void) {}
static inline void mmap_unlock(void) {}
/**
* get_page_addr_code() - full-system version
* @env: CPUArchState
* @addr: guest virtual address of guest code
*
* If we cannot translate and execute from the entire RAM page, or if
* the region is not backed by RAM, returns -1. Otherwise, returns the
* ram_addr_t corresponding to the guest code at @addr.
*
* Note: this function can trigger an exception.
*/
tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr);
/**
* get_page_addr_code_hostp() - full-system version
* @env: CPUArchState
* @addr: guest virtual address of guest code
*
* See get_page_addr_code() (full-system version) for documentation on the
* return value.
*
* Sets *@hostp (when @hostp is non-NULL) as follows.
* If the return value is -1, sets *@hostp to NULL. Otherwise, sets *@hostp
* to the host address where @addr's content is kept.
*
* Note: this function can trigger an exception.
*/
tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
void **hostp);
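/*
 * Illustrative sketch (not in the original): using the -1 convention
 * documented above to test whether guest code at @pc is backed by RAM.
 */
static inline bool example_code_is_ram(CPUArchState *env, target_ulong pc)
{
    void *host;
    return get_page_addr_code_hostp(env, pc, &host) != (tb_page_addr_t)-1;
}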
void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length);
void tlb_set_dirty(CPUState *cpu, target_ulong vaddr);
/* exec.c */
void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr);
MemoryRegionSection *
address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
hwaddr *xlat, hwaddr *plen,
MemTxAttrs attrs, int *prot);
hwaddr memory_region_section_get_iotlb(CPUState *cpu,
MemoryRegionSection *section);
#endif


@ -1,72 +1,69 @@
#ifndef GEN_ICOUNT_H
#define GEN_ICOUNT_H
#include "qemu/timer.h"
/* Helpers for instruction counting code generation. */
static inline void gen_io_start(TCGContext *tcg_ctx)
{
TCGv_i32 tmp = tcg_const_i32(tcg_ctx, 1);
tcg_gen_st_i32(tcg_ctx, tmp, tcg_ctx->cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
tcg_temp_free_i32(tcg_ctx, tmp);
}
/*
* cpu->can_do_io is cleared automatically at the beginning of
* each translation block. The cost is minimal and only paid
* for -icount, plus it would be very easy to forget doing it
* in the translator. Therefore, backends only need to call
* gen_io_start.
*/
static inline void gen_io_end(TCGContext *tcg_ctx)
{
TCGv_i32 tmp = tcg_const_i32(tcg_ctx, 0);
tcg_gen_st_i32(tcg_ctx, tmp, tcg_ctx->cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
tcg_temp_free_i32(tcg_ctx, tmp);
}
static inline void gen_tb_start(TCGContext *tcg_ctx, TranslationBlock *tb)
{
    TCGv_i32 count;

    tcg_ctx->exitreq_label = gen_new_label(tcg_ctx);

    // first TB ever does not need to check exit request
    if (tcg_ctx->uc->first_tb) {
        // next TB is not the first anymore
        tcg_ctx->uc->first_tb = false;
        return;
    }

    count = tcg_temp_new_i32(tcg_ctx);
    tcg_gen_ld_i32(tcg_ctx, count, tcg_ctx->cpu_env,
                   offsetof(ArchCPU, neg.icount_decr.u32) -
                   offsetof(ArchCPU, env));
    tcg_gen_brcondi_i32(tcg_ctx, TCG_COND_LT, count, 0, tcg_ctx->exitreq_label);
    tcg_temp_free_i32(tcg_ctx, count);
}
static inline void gen_tb_end(TCGContext *tcg_ctx, TranslationBlock *tb, int num_insns)
{
    if (tb_cflags(tb) & CF_USE_ICOUNT) {
        /* Update the num_insn immediate parameter now that we know
         * the actual insn count. */
        tcg_set_insn_param(tcg_ctx->icount_start_insn, 1, num_insns);
    }

    gen_set_label(tcg_ctx, tcg_ctx->exitreq_label);
    tcg_gen_exit_tb(tcg_ctx, tb, TB_EXIT_REQUESTED);
}
#endif


@ -2,38 +2,38 @@
This one expands generation functions for tcg opcodes. */
#ifndef HELPER_GEN_H
#define HELPER_GEN_H
#include "exec/helper-head.h"
#define DEF_HELPER_FLAGS_0(name, flags, ret) \
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl0(ret)) \
{ \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 0, NULL); \
}
#define DEF_HELPER_FLAGS_1(name, flags, ret, t1) \
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(ret) \
                                           dh_arg_decl(t1, 1)) \
{ \
    TCGTemp *args[1] = { dh_arg(t1, 1) }; \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 1, args); \
}
#define DEF_HELPER_FLAGS_2(name, flags, ret, t1, t2) \
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(ret) \
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2)) \
{ \
    TCGTemp *args[2] = { dh_arg(t1, 1), dh_arg(t2, 2) }; \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 2, args); \
}
#define DEF_HELPER_FLAGS_3(name, flags, ret, t1, t2, t3) \
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(ret) \
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), dh_arg_decl(t3, 3)) \
{ \
    TCGTemp *args[3] = { dh_arg(t1, 1), dh_arg(t2, 2), dh_arg(t3, 3) }; \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 3, args); \
}
#define DEF_HELPER_FLAGS_4(name, flags, ret, t1, t2, t3, t4) \
@ -41,9 +41,9 @@ static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(r
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), \
dh_arg_decl(t3, 3), dh_arg_decl(t4, 4)) \
{ \
    TCGTemp *args[4] = { dh_arg(t1, 1), dh_arg(t2, 2), \
                         dh_arg(t3, 3), dh_arg(t4, 4) }; \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 4, args); \
}
#define DEF_HELPER_FLAGS_5(name, flags, ret, t1, t2, t3, t4, t5) \
@ -51,13 +51,35 @@ static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(r
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), dh_arg_decl(t3, 3), \
dh_arg_decl(t4, 4), dh_arg_decl(t5, 5)) \
{ \
    TCGTemp *args[5] = { dh_arg(t1, 1), dh_arg(t2, 2), dh_arg(t3, 3), \
                         dh_arg(t4, 4), dh_arg(t5, 5) }; \
    tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 5, args); \
}
#define DEF_HELPER_FLAGS_6(name, flags, ret, t1, t2, t3, t4, t5, t6) \
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(ret) \
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), dh_arg_decl(t3, 3), \
dh_arg_decl(t4, 4), dh_arg_decl(t5, 5), dh_arg_decl(t6, 6)) \
{ \
TCGTemp *args[6] = { dh_arg(t1, 1), dh_arg(t2, 2), dh_arg(t3, 3), \
dh_arg(t4, 4), dh_arg(t5, 5), dh_arg(t6, 6) }; \
tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 6, args); \
}
#define DEF_HELPER_FLAGS_7(name, flags, ret, t1, t2, t3, t4, t5, t6, t7)\
static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(ret) \
dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), dh_arg_decl(t3, 3), \
dh_arg_decl(t4, 4), dh_arg_decl(t5, 5), dh_arg_decl(t6, 6), \
dh_arg_decl(t7, 7)) \
{ \
TCGTemp *args[7] = { dh_arg(t1, 1), dh_arg(t2, 2), dh_arg(t3, 3), \
dh_arg(t4, 4), dh_arg(t5, 5), dh_arg(t6, 6), \
dh_arg(t7, 7) }; \
tcg_gen_callN(tcg_ctx, HELPER(name), dh_retvar(ret), 7, args); \
}
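/*
 * Illustrative example (hypothetical helper name, not in the original):
 * a target header declaring
 *
 *     DEF_HELPER_2(example_add, i32, i32, i32)
 *
 * gets from this file an inline of the form
 *
 *     static inline void gen_helper_example_add(TCGContext *tcg_ctx,
 *                                               TCGv_i32 retval,
 *                                               TCGv_i32 arg1,
 *                                               TCGv_i32 arg2);
 *
 * which emits a TCG call op targeting helper_example_add().
 */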
#include "helper.h"
#include "tcg-runtime.h"
#include "accel/tcg/tcg-runtime.h"
#undef DEF_HELPER_FLAGS_0
#undef DEF_HELPER_FLAGS_1
@ -65,6 +87,8 @@ static inline void glue(gen_helper_, name)(TCGContext *tcg_ctx, dh_retvar_decl(r
#undef DEF_HELPER_FLAGS_3
#undef DEF_HELPER_FLAGS_4
#undef DEF_HELPER_FLAGS_5
#undef DEF_HELPER_FLAGS_6
#undef DEF_HELPER_FLAGS_7
#undef GEN_HELPER
#endif /* HELPER_GEN_H */


@ -15,36 +15,24 @@
GEN_HELPER 2 to do runtime registration helper functions.
*/
#ifndef EXEC_HELPER_HEAD_H
#define EXEC_HELPER_HEAD_H
#define HELPER(name) glue(helper_, name)
/* Some types that make sense in C, but not for TCG. */
#define dh_alias_i32 i32
#define dh_alias_s32 i32
#define dh_alias_int i32
#define dh_alias_i64 i64
#define dh_alias_s64 i64
#define dh_alias_f16 i32
#define dh_alias_f32 i32
#define dh_alias_f64 i64
#define dh_alias_ptr ptr
#define dh_alias_cptr ptr
#define dh_alias_void void
#define dh_alias_noreturn noreturn
#define dh_alias(t) glue(dh_alias_, t)
#define dh_ctype_i32 uint32_t
@ -52,15 +40,28 @@
#define dh_ctype_int int
#define dh_ctype_i64 uint64_t
#define dh_ctype_s64 int64_t
#define dh_ctype_f16 uint32_t
#define dh_ctype_f32 float32
#define dh_ctype_f64 float64
#define dh_ctype_ptr void *
#define dh_ctype_cptr const void *
#define dh_ctype_void void
#define dh_ctype_noreturn void QEMU_NORETURN
#define dh_ctype(t) dh_ctype_##t
#ifdef NEED_CPU_H
# ifdef TARGET_LONG_BITS
# if TARGET_LONG_BITS == 32
# define dh_alias_tl i32
# else
# define dh_alias_tl i64
# endif
# endif
# define dh_alias_env ptr
# define dh_ctype_tl target_ulong
# define dh_ctype_env CPUArchState *
#endif
/* We can't use glue() here because it falls foul of C preprocessor
recursive expansion rules. */
#define dh_retvar_decl0_void void
@ -77,11 +78,11 @@
#define dh_retvar_decl_ptr TCGv_ptr retval,
#define dh_retvar_decl(t) glue(dh_retvar_decl_, dh_alias(t))
#define dh_retvar_void NULL
#define dh_retvar_noreturn NULL
#define dh_retvar_i32 tcgv_i32_temp(tcg_ctx, retval)
#define dh_retvar_i64 tcgv_i64_temp(tcg_ctx, retval)
#define dh_retvar_ptr tcgv_ptr_temp(tcg_ctx, retval)
#define dh_retvar(t) glue(dh_retvar_, dh_alias(t))
#define dh_is_64bit_void 0
@ -89,6 +90,7 @@
#define dh_is_64bit_i32 0
#define dh_is_64bit_i64 1
#define dh_is_64bit_ptr (sizeof(void *) == 8)
#define dh_is_64bit_cptr dh_is_64bit_ptr
#define dh_is_64bit(t) glue(dh_is_64bit_, dh_alias(t))
#define dh_is_signed_void 0
@ -97,6 +99,7 @@
#define dh_is_signed_s32 1
#define dh_is_signed_i64 0
#define dh_is_signed_s64 1
#define dh_is_signed_f16 0
#define dh_is_signed_f32 0
#define dh_is_signed_f64 0
#define dh_is_signed_tl 0
@ -105,14 +108,29 @@
extension instructions that may be required, e.g. ia64's addp4. But
for now we don't support any 64-bit targets with 32-bit pointers. */
#define dh_is_signed_ptr 0
#define dh_is_signed_cptr dh_is_signed_ptr
#define dh_is_signed_env dh_is_signed_ptr
#define dh_is_signed(t) dh_is_signed_##t
#define dh_callflag_i32 0
#define dh_callflag_s32 0
#define dh_callflag_int 0
#define dh_callflag_i64 0
#define dh_callflag_s64 0
#define dh_callflag_f16 0
#define dh_callflag_f32 0
#define dh_callflag_f64 0
#define dh_callflag_ptr 0
#define dh_callflag_cptr dh_callflag_ptr
#define dh_callflag_void 0
#define dh_callflag_noreturn TCG_CALL_NO_RETURN
#define dh_callflag(t) glue(dh_callflag_, dh_alias(t))
#define dh_sizemask(t, n) \
((dh_is_64bit(t) << (n*2)) | (dh_is_signed(t) << (n*2+1)))
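/*
 * Illustrative example (not in the original): for an i64 value in
 * argument slot 1, dh_sizemask(i64, 1) evaluates to
 * (1 << 2) | (0 << 3) == 4; bit 2 marks the argument as 64-bit,
 * bit 3 would mark it signed.
 */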
#define dh_arg(t, n) \
    glue(glue(tcgv_, dh_alias(t)), _temp)(tcg_ctx, glue(arg, n))
#define dh_arg_decl(t, n) glue(TCGv_, dh_alias(t)) glue(arg, n)
@ -128,7 +146,11 @@
DEF_HELPER_FLAGS_4(name, 0, ret, t1, t2, t3, t4)
#define DEF_HELPER_5(name, ret, t1, t2, t3, t4, t5) \
DEF_HELPER_FLAGS_5(name, 0, ret, t1, t2, t3, t4, t5)
#define DEF_HELPER_6(name, ret, t1, t2, t3, t4, t5, t6) \
DEF_HELPER_FLAGS_6(name, 0, ret, t1, t2, t3, t4, t5, t6)
#define DEF_HELPER_7(name, ret, t1, t2, t3, t4, t5, t6, t7) \
DEF_HELPER_FLAGS_7(name, 0, ret, t1, t2, t3, t4, t5, t6, t7)
/* MAX_OPC_PARAM_IARGS must be set to n if last entry is DEF_HELPER_FLAGS_n. */
#endif /* EXEC_HELPER_HEAD_H */


@ -2,9 +2,9 @@
This one expands prototypes for the helper functions. */
#ifndef HELPER_PROTO_H
#define HELPER_PROTO_H
#include "exec/helper-head.h"
#define DEF_HELPER_FLAGS_0(name, flags, ret) \
dh_ctype(ret) HELPER(name) (void);
@ -26,8 +26,17 @@ dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \
dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \
dh_ctype(t4), dh_ctype(t5));
#define DEF_HELPER_FLAGS_6(name, flags, ret, t1, t2, t3, t4, t5, t6) \
dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \
dh_ctype(t4), dh_ctype(t5), dh_ctype(t6));
#define DEF_HELPER_FLAGS_7(name, flags, ret, t1, t2, t3, t4, t5, t6, t7) \
dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \
dh_ctype(t4), dh_ctype(t5), dh_ctype(t6), \
dh_ctype(t7));
#include "helper.h"
#include "tcg-runtime.h"
#include "accel/tcg/tcg-runtime.h"
#undef DEF_HELPER_FLAGS_0
#undef DEF_HELPER_FLAGS_1
@ -35,5 +44,7 @@ dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \
#undef DEF_HELPER_FLAGS_3
#undef DEF_HELPER_FLAGS_4
#undef DEF_HELPER_FLAGS_5
#undef DEF_HELPER_FLAGS_6
#undef DEF_HELPER_FLAGS_7
#endif /* HELPER_PROTO_H */


@ -2,47 +2,73 @@
This one defines data structures private to tcg.c. */
#ifndef HELPER_TCG_H
#define HELPER_TCG_H
#include "exec/helper-head.h"
/* Need one more level of indirection before stringification
to get all the macros expanded first. */
#define str(s) #s
#define DEF_HELPER_FLAGS_0(NAME, FLAGS, ret) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) },
#define DEF_HELPER_FLAGS_1(NAME, FLAGS, ret, t1) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) },
#define DEF_HELPER_FLAGS_2(NAME, FLAGS, ret, t1, t2) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) },
#define DEF_HELPER_FLAGS_3(NAME, FLAGS, ret, t1, t2, t3) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) | dh_sizemask(t3, 3) },
#define DEF_HELPER_FLAGS_4(NAME, FLAGS, ret, t1, t2, t3, t4) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) },
#define DEF_HELPER_FLAGS_5(NAME, FLAGS, ret, t1, t2, t3, t4, t5) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
| dh_sizemask(t5, 5) },
#include "helper.h"
#include "tcg-runtime.h"
#define DEF_HELPER_FLAGS_6(NAME, FLAGS, ret, t1, t2, t3, t4, t5, t6) \
{ .func = HELPER(NAME), .name = str(NAME), \
.flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
| dh_sizemask(t5, 5) | dh_sizemask(t6, 6) },
#define DEF_HELPER_FLAGS_7(NAME, FLAGS, ret, t1, t2, t3, t4, t5, t6, t7) \
    { .func = HELPER(NAME), .name = str(NAME), \
      .flags = FLAGS | dh_callflag(ret), \
.sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
| dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
| dh_sizemask(t5, 5) | dh_sizemask(t6, 6) | dh_sizemask(t7, 7) },
#include "helper.h"
#include "accel/tcg/tcg-runtime.h"
#undef str
#undef DEF_HELPER_FLAGS_0
#undef DEF_HELPER_FLAGS_1
#undef DEF_HELPER_FLAGS_2
#undef DEF_HELPER_FLAGS_3
#undef DEF_HELPER_FLAGS_4
#undef DEF_HELPER_FLAGS_5
#undef DEF_HELPER_FLAGS_6
#undef DEF_HELPER_FLAGS_7
#endif /* HELPER_TCG_H */


@ -3,12 +3,11 @@
#ifndef HWADDR_H
#define HWADDR_H
#define HWADDR_BITS 64
/* hwaddr is the type of a physical address (its size can
be different from 'target_ulong'). */
#include "unicorn/platform.h"
typedef uint64_t hwaddr;
#define HWADDR_MAX UINT64_MAX
#define TARGET_FMT_plx "%016" PRIx64


@ -24,13 +24,8 @@
#ifndef IOPORT_H
#define IOPORT_H
#include "qemu-common.h"
#include "qom/object.h"
#include "exec/memory.h"
#define MAX_IOPORTS (64 * 1024)
#define IOPORTS_MASK (MAX_IOPORTS - 1)
@ -45,15 +40,31 @@ typedef struct MemoryRegionPortio {
#define PORTIO_END_OF_LIST() { }
#ifndef CONFIG_USER_ONLY
extern const MemoryRegionOps unassigned_io_ops;
#endif
void cpu_outb(struct uc_struct *uc, uint32_t addr, uint8_t val);
void cpu_outw(struct uc_struct *uc, uint32_t addr, uint16_t val);
void cpu_outl(struct uc_struct *uc, uint32_t addr, uint32_t val);
uint8_t cpu_inb(struct uc_struct *uc, uint32_t addr);
uint16_t cpu_inw(struct uc_struct *uc, uint32_t addr);
uint32_t cpu_inl(struct uc_struct *uc, uint32_t addr);
typedef struct PortioList {
const struct MemoryRegionPortio *ports;
struct MemoryRegion *address_space;
unsigned nr;
struct MemoryRegion **regions;
void *opaque;
const char *name;
} PortioList;
void portio_list_init(PortioList *piolist,
const struct MemoryRegionPortio *callbacks,
void *opaque, const char *name);
void portio_list_destroy(PortioList *piolist);
void portio_list_add(PortioList *piolist,
struct MemoryRegion *address_space,
uint32_t addr);
void portio_list_del(PortioList *piolist);
#endif /* IOPORT_H */


@ -0,0 +1,71 @@
/*
* Memory transaction attributes
*
* Copyright (c) 2015 Linaro Limited.
*
* Authors:
* Peter Maydell <peter.maydell@linaro.org>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef MEMATTRS_H
#define MEMATTRS_H
/* Every memory transaction has associated with it a set of
* attributes. Some of these are generic (such as the ID of
* the bus master); some are specific to a particular kind of
* bus (such as the ARM Secure/NonSecure bit). We define them
* all as non-overlapping bitfields in a single struct to avoid
* confusion if different parts of QEMU used the same bit for
* different semantics.
*/
typedef struct MemTxAttrs {
/* Bus masters which don't specify any attributes will get this
* (via the MEMTXATTRS_UNSPECIFIED constant), so that we can
* distinguish "all attributes deliberately clear" from
* "didn't specify" if necessary.
*/
unsigned int unspecified:1;
/* ARM/AMBA: TrustZone Secure access
* x86: System Management Mode access
*/
unsigned int secure:1;
/* Memory access is usermode (unprivileged) */
unsigned int user:1;
/* Requester ID (for MSI for example) */
unsigned int requester_id:16;
/* Invert endianness for this page */
unsigned int byte_swap:1;
/*
* The following are target-specific page-table bits. These are not
* related to actual memory transactions at all. However, this structure
* is part of the tlb_fill interface, cached in the cputlb structure,
* and has unused bits. These fields will be read by target-specific
* helpers using env->iotlb[mmu_idx][tlb_index()].attrs.target_tlb_bitN.
*/
unsigned int target_tlb_bit0 : 1;
unsigned int target_tlb_bit1 : 1;
unsigned int target_tlb_bit2 : 1;
} MemTxAttrs;
/* Bus masters which don't specify any attributes will get this,
* which has all attribute bits clear except the topmost one
* (so that we can distinguish "all attributes deliberately clear"
* from "didn't specify" if necessary).
*/
#define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs) { .unspecified = 1 })
/* New-style MMIO accessors can indicate that the transaction failed.
* A zero (MEMTX_OK) response means success; anything else is a failure
* of some kind. The memory subsystem will bitwise-OR together results
* if it is synthesizing an operation from multiple smaller accesses.
*/
#define MEMTX_OK 0
#define MEMTX_ERROR (1U << 0) /* device returned an error */
#define MEMTX_DECODE_ERROR (1U << 1) /* nothing at that address */
typedef uint32_t MemTxResult;
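/*
 * Illustrative sketch (not in the original): sub-access results are
 * bitwise-ORed as described above, so MEMTX_OK (zero) survives only if
 * every part of a synthesized access succeeded.
 */
static inline MemTxResult example_combine_results(MemTxResult lo,
                                                  MemTxResult hi)
{
    return lo | hi;   /* non-zero means at least one part failed */
}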
#endif
