lib: update libmetal to release v2020.04.0

Origin:
	https://github.com/OpenAMP/libmetal

commit:
	f1e8154b6392838a7c68871ab03400112b1e59b7

Status:
	Merged the new libmetal version after removing unneeded directories.

Release Description:
	https://github.com/OpenAMP/libmetal/releases/tag/v2020.04.0

Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Arnaud Pouliquen 2020-05-04 10:33:40 +02:00 committed by Kumar Gala
parent 60f40977ec
commit 3c3c9ec83b
72 changed files with 381 additions and 256 deletions

README

@ -27,7 +27,7 @@ URL:
https://github.com/OpenAMP/libmetal
commit:
e3dfc2fe85e5ceb8b193c4cf559b17bbd53e8866
f1e8154b6392838a7c68871ab03400112b1e59b7
Maintained-by:
External


@ -8,18 +8,15 @@ consult when they have a question about OpenAMP and to provide a set of
names to be CC'd when submitting a patch.
## Project Administration
Wendy Liang <wendy.liang@xilinx.com>
Ed Mooring <ed.mooring@linaro.org>
Arnaud Pouliquen <arnaud.pouliquen@st.com>
### All patches CC here
open-amp@googlegroups.com
openamp-rp@lists.openampproject.org
## Machines
### Xilinx Platform - Zynq-7000
Wendy Liang <wendy.liang@xilinx.com>
Ed Mooring <ed.mooring@linaro.org>
### Xilinx Platform - Zynq UltraScale+ MPSoC
Wendy Liang <wendy.liang@xilinx.com>
Ed Mooring <ed.mooring@linaro.org>


@ -8,6 +8,26 @@ and request memory across the following operating environments:
* RTOS (with and without virtual memory)
* Bare-metal environments
## Project configuration
The configuration phase begins when the user invokes CMake. CMake begins by processing the CMakeLists.txt file and the cmake directory.
Some CMake options are available to help users customize libmetal for their own
project; an example invocation is shown after the list.
* **WITH_DOC** (default ON): Build with documentation. Add -DWITH_DOC=OFF in
cmake command line to disable.
* **WITH_EXAMPLES** (default ON): Build with application examples. Add
-DWITH_EXAMPLES=OFF in cmake command line to disable the option.
* **WITH_TESTS** (default ON): Build with application tests. Add -DWITH_TESTS=OFF
in cmake command line to disable the option.
* **WITH_DEFAULT_LOGGER** (default ON): Build with default trace logger. Add
-DWITH_DEFAULT_LOGGER=OFF in cmake command line to disable the option.
* **WITH_SHARED_LIB** (default ON): Generate a shared library. Add
-DWITH_SHARED_LIB=OFF in cmake command line to disable the option.
* **WITH_STATIC_LIB** (default ON): Generate a static library. Add
-DWITH_STATIC_LIB=OFF in cmake command line to disable the option.
* **WITH_ZEPHYR** (default OFF): Build for Zephyr environment. Add
-DWITH_ZEPHYR=ON in cmake command line to enable the option.
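For instance, a minimal out-of-tree build that overrides a few of these options
(the build directory name and the chosen option values below are only
illustrative) could look like this:
```
# illustrative out-of-tree build: disable docs and examples, keep the static library
mkdir build && cd build
cmake .. -DWITH_DOC=OFF -DWITH_EXAMPLES=OFF -DWITH_STATIC_LIB=ON
make
```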
## Build Steps
### Building for Linux Host
@ -53,6 +73,12 @@ example toolchain file:
```
### Building for Zephyr
The [zephyr-libmetal](https://github.com/zephyrproject-rtos/libmetal) repository
provides libmetal for the Zephyr project. It is mainly a fork of this repository,
with some add-ons for integration into the Zephyr project.
The following instructions only cover running the test application on QEMU in a
Zephyr environment.
As Zephyr uses CMake, we build the libmetal library and the test application as
targets of the Zephyr CMake project. Here is how to build libmetal for Zephyr:
```
@ -218,3 +244,48 @@ libmetal sleep APIs provide getting delay execution implementation.
This API is for compiler dependent functions. For this release, there is only
a GCC implementation, and compiler specific code is limited to atomic
operations.
## How to contribute:
As an open-source project, we welcome and encourage the community to submit patches directly to the project. As a contributor you should be familiar with common developer tools such as Git and CMake, and platforms such as GitHub.
The following points should be respected to facilitate the review process.
### Licensing
Code is contributed to OpenAMP under a number of licenses, but all code must be compatible with the [BSD License](https://github.com/OpenAMP/libmetal/blob/master/LICENSE.md), which is the license covering the OpenAMP distribution as a whole. In practice, use the following tag instead of the full license text in the individual files:
```
SPDX-License-Identifier: BSD-3-Clause
```
### Signed-off-by
The commit message must contain a Signed-off-by: line, and your email must match the change authorship information. Make sure your .gitconfig is set up correctly:
```
git config --global user.name "First-name Last-name"
git config --global user.email "yourmail@company.com"
```
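With user.name and user.email configured, `git commit -s` (a standard Git option) appends the matching Signed-off-by: line for you; the commit subject below is just a placeholder:
```
git commit -s -m "lib: fix typo in IRQ handling comment"
```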
### gitlint
Before you submit a pull request to the project, verify that your commit messages meet the requirements. The check can be performed locally using the gitlint command.
Run gitlint locally in your tree and branch where your patches have been committed:
```gitlint```
Note that gitlint only checks HEAD (the most recent commit), so you should run it after each commit, or use the --commits option to specify a commit range covering all the development patches to be submitted.
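For example, assuming your patches sit on a local branch created from master, the whole series can be checked in one pass:
```
# check every commit on the current branch that is not yet in master
gitlint --commits "master..HEAD"
```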
### Code style
In general, follow the Linux kernel coding style, with the following exceptions:
* Use /** */ for doxygen comments that need to appear in the documentation.
The Linux kernel GPL-licensed tool checkpatch is used to check coding style conformity. Checkpatch is available in the scripts directory.
To check the last \<n\> commits in your git branch:
```
./scripts/checkpatch.pl --strict -g HEAD-<n>
```
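The same script can also be limited to the most recent commit, which is convenient after amending; a sketch, assuming the script path shown above:
```
# check only the latest commit (i.e. <n> = 1)
./scripts/checkpatch.pl --strict -g HEAD-1
```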
### Send a pull request
We use the standard GitHub pull-request mechanism. Please refer to the GitHub documentation for help.
## Communication and Collaboration
[Subscribe](https://lists.openampproject.org/mailman/listinfo/openamp-rp) to the OpenAMP mailing list (openamp-rp@lists.openampproject.org).
For more details on the framework, please refer to the [OpenAMP wiki](https://github.com/OpenAMP/open-amp/wiki).


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup Memory Allocation Interfaces
* @{ */
* @{
*/
/**
* @brief allocate requested memory size
@ -31,7 +32,7 @@ static inline void *metal_allocate_memory(unsigned int size);
/**
* @brief free the memory previously allocated
*
* @param[in] ptr pointer to memory
* @param[in] ptr pointer to memory
*/
static inline void metal_free_memory(void *ptr);


@ -18,9 +18,9 @@
extern "C" {
#endif
/** \defgroup cache CACHE Interfaces
* @{ */
* @{
*/
/**
* @brief flush specified data cache


@ -23,6 +23,7 @@ typedef short atomic_short;
typedef unsigned short atomic_ushort;
typedef int atomic_int;
typedef unsigned int atomic_uint;
typedef atomic_uint atomic_uintptr_t;
typedef long atomic_long;
typedef unsigned long atomic_ulong;
typedef long long atomic_llong;


@ -20,7 +20,8 @@ extern "C" {
#endif
/** \defgroup condition Condition Variable Interfaces
* @{ */
* @{
*/
/** Opaque libmetal condition variable data structure. */
struct metal_condition;


@ -45,8 +45,8 @@ int metal_bus_find(const char *name, struct metal_bus **result)
bus = metal_container_of(node, struct metal_bus, node);
if (strcmp(bus->name, name) == 0 && result) {
*result = bus;
return 0;
}
return 0;
}
}
return -ENOENT;
}
@ -106,9 +106,9 @@ int metal_generic_dev_open(struct metal_bus *bus, const char *dev_name,
metal_list_for_each(&_metal.common.generic_device_list, node) {
dev = metal_container_of(node, struct metal_device, node);
if (strcmp(dev->name, dev_name) == 0) {
*device = dev;
*device = dev;
return metal_generic_dev_sys_open(dev);
}
}
}
return -ENODEV;


@ -23,7 +23,8 @@ extern "C" {
#endif
/** \defgroup device Bus Abstraction
* @{ */
* @{
*/
#ifndef METAL_MAX_DEVICE_REGIONS
#define METAL_MAX_DEVICE_REGIONS 32
@ -41,8 +42,8 @@ struct metal_bus_ops {
void (*dev_close)(struct metal_bus *bus,
struct metal_device *device);
void (*dev_irq_ack)(struct metal_bus *bus,
struct metal_device *device,
int irq);
struct metal_device *device,
int irq);
int (*dev_dma_map)(struct metal_bus *bus,
struct metal_device *device,
uint32_t dir,
@ -50,10 +51,10 @@ struct metal_bus_ops {
int nents_in,
struct metal_sg *sg_out);
void (*dev_dma_unmap)(struct metal_bus *bus,
struct metal_device *device,
uint32_t dir,
struct metal_sg *sg,
int nents);
struct metal_device *device,
uint32_t dir,
struct metal_sg *sg,
int nents);
};
/** Libmetal bus structure. */
@ -71,10 +72,10 @@ extern struct metal_bus metal_generic_bus;
struct metal_device {
const char *name; /**< Device name */
struct metal_bus *bus; /**< Bus that contains device */
unsigned num_regions; /**< Number of I/O regions in
device */
unsigned int num_regions; /**< Number of I/O regions in
device */
struct metal_io_region regions[METAL_MAX_DEVICE_REGIONS]; /**< Array of
I/O regions in device*/
I/O regions in device*/
struct metal_list node; /**< Node on bus' list of devices */
int irq_num; /**< Number of IRQs per device */
void *irq_info; /**< IRQ ID */
@ -143,7 +144,7 @@ extern void metal_device_close(struct metal_device *device);
* @return I/O accessor handle, or NULL on failure.
*/
static inline struct metal_io_region *
metal_device_io_region(struct metal_device *device, unsigned index)
metal_device_io_region(struct metal_device *device, unsigned int index)
{
return (index < device->num_regions
? &device->regions[index]


@ -29,8 +29,7 @@ int metal_dma_map(struct metal_device *dev,
/* If it is device read, apply memory write fence. */
atomic_thread_fence(memory_order_release);
else
/* If it is device write or device r/w,
apply memory r/w fence. */
/* If it is device write or r/w, apply memory r/w fence. */
atomic_thread_fence(memory_order_acq_rel);
nents_out = dev->bus->ops.dev_dma_map(dev->bus,
dev, dir, sg_in, nents_in, sg_out);
@ -47,8 +46,7 @@ void metal_dma_unmap(struct metal_device *dev,
/* If it is device read, apply memory write fence. */
atomic_thread_fence(memory_order_release);
else
/* If it is device write or device r/w,
apply memory r/w fence. */
/*If it is device write or r/w, apply memory r/w fence */
atomic_thread_fence(memory_order_acq_rel);
if (!dev || !dev->bus->ops.dev_dma_unmap || !sg)


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup dma DMA Interfaces
* @{ */
* @{
*/
#include <stdint.h>
#include <metal/sys.h>
@ -66,9 +67,9 @@ int metal_dma_map(struct metal_device *dev,
* @param[in] nents number of sg list entries of DMA memory
*/
void metal_dma_unmap(struct metal_device *dev,
uint32_t dir,
struct metal_sg *sg,
int nents);
uint32_t dir,
struct metal_sg *sg,
int nents);
/** @} */


@ -11,7 +11,7 @@
void metal_io_init(struct metal_io_region *io, void *virt,
const metal_phys_addr_t *physmap, size_t size,
unsigned page_shift, unsigned int mem_flags,
unsigned int page_shift, unsigned int mem_flags,
const struct metal_io_ops *ops)
{
const struct metal_io_ops nops = {


@ -27,7 +27,8 @@ extern "C" {
#endif
/** \defgroup io IO Interfaces
* @{ */
* @{
*/
#ifdef __MICROBLAZE__
#define NO_ATOMIC_64_SUPPORT
@ -47,24 +48,23 @@ struct metal_io_ops {
memory_order order,
int width);
int (*block_read)(struct metal_io_region *io,
unsigned long offset,
void *restrict dst,
memory_order order,
int len);
unsigned long offset,
void *restrict dst,
memory_order order,
int len);
int (*block_write)(struct metal_io_region *io,
unsigned long offset,
const void *restrict src,
memory_order order,
int len);
unsigned long offset,
const void *restrict src,
memory_order order,
int len);
void (*block_set)(struct metal_io_region *io,
unsigned long offset,
unsigned char value,
memory_order order,
int len);
unsigned long offset,
unsigned char value,
memory_order order,
int len);
void (*close)(struct metal_io_region *io);
metal_phys_addr_t
(*offset_to_phys)(struct metal_io_region *io,
unsigned long offset);
metal_phys_addr_t (*offset_to_phys)(struct metal_io_region *io,
unsigned long offset);
unsigned long (*phys_to_offset)(struct metal_io_region *io,
metal_phys_addr_t phys);
};
@ -73,8 +73,8 @@ struct metal_io_ops {
struct metal_io_region {
void *virt; /**< base virtual address */
const metal_phys_addr_t *physmap; /**< table of base physical address
of each of the pages in the I/O
region */
of each of the pages in the I/O
region */
size_t size; /**< size of the I/O region */
unsigned long page_shift; /**< page shift of I/O region */
metal_phys_addr_t page_mask; /**< page mask of I/O region */
@ -97,7 +97,7 @@ struct metal_io_region {
void
metal_io_init(struct metal_io_region *io, void *virt,
const metal_phys_addr_t *physmap, size_t size,
unsigned page_shift, unsigned int mem_flags,
unsigned int page_shift, unsigned int mem_flags,
const struct metal_io_ops *ops);
/**
@ -146,6 +146,7 @@ static inline unsigned long
metal_io_virt_to_offset(struct metal_io_region *io, void *virt)
{
size_t offset = (uint8_t *)virt - (uint8_t *)io->virt;
return (offset < io->size ? offset : METAL_BAD_OFFSET);
}
@ -163,7 +164,7 @@ metal_io_phys(struct metal_io_region *io, unsigned long offset)
unsigned long page = (io->page_shift >=
sizeof(offset) * CHAR_BIT ?
0 : offset >> io->page_shift);
return (io->physmap != NULL && offset < io->size
return (io->physmap && offset < io->size
? io->physmap[page] + (offset & io->page_mask)
: METAL_BAD_PHYS);
}
@ -269,6 +270,7 @@ metal_io_write(struct metal_io_region *io, unsigned long offset,
uint64_t value, memory_order order, int width)
{
void *ptr = metal_io_virt(io, offset);
if (io->ops.write)
(*io->ops.write)(io, offset, value, order, width);
else if (ptr && sizeof(atomic_uchar) == width)
@ -284,7 +286,7 @@ metal_io_write(struct metal_io_region *io, unsigned long offset,
atomic_store_explicit((atomic_ullong *)ptr, value, order);
#endif
else
metal_assert (0);
metal_assert(0);
}
#define metal_io_read8_explicit(_io, _ofs, _order) \
@ -332,7 +334,7 @@ metal_io_write(struct metal_io_region *io, unsigned long offset,
* @return On success, number of bytes read. On failure, negative value
*/
int metal_io_block_read(struct metal_io_region *io, unsigned long offset,
void *restrict dst, int len);
void *restrict dst, int len);
/**
* @brief Write a block into an I/O region.
@ -343,7 +345,7 @@ int metal_io_block_read(struct metal_io_region *io, unsigned long offset,
* @return On success, number of bytes written. On failure, negative value
*/
int metal_io_block_write(struct metal_io_region *io, unsigned long offset,
const void *restrict src, int len);
const void *restrict src, int len);
/**
* @brief fill a block of an I/O region.
@ -354,7 +356,7 @@ int metal_io_block_write(struct metal_io_region *io, unsigned long offset,
* @return On success, number of bytes filled. On failure, negative value
*/
int metal_io_block_set(struct metal_io_region *io, unsigned long offset,
unsigned char value, int len);
unsigned char value, int len);
#include <metal/system/@PROJECT_SYSTEM@/io.h>


@ -61,9 +61,11 @@ int metal_irq_register_controller(struct metal_irq_controller *cntr)
}
}
/* Allocate IRQ numbers which are not yet used by any IRQ
* controllers.*/
irq_base = metal_irq_allocate(cntr->irq_base , cntr->irq_num);
/*
* Allocate IRQ numbers which are not yet used by any IRQ
* controllers.
*/
irq_base = metal_irq_allocate(cntr->irq_base, cntr->irq_num);
if (irq_base == METAL_IRQ_ANY) {
return -EINVAL;
}
@ -83,11 +85,11 @@ static struct metal_irq_controller *metal_irq_get_controller(int irq)
cntr = (struct metal_irq_controller *)
metal_container_of(node, struct metal_irq_controller,
node);
node);
irq_base = cntr->irq_base;
irq_end = irq_base + cntr->irq_num;
if (irq >= irq_base && irq < irq_end) {
return cntr;
return cntr;
}
}
return NULL;


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup irq Interrupt Handling Interfaces
* @{ */
* @{
*/
#include <metal/list.h>
#include <stdlib.h>


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup irq Interrupt Handling Interfaces
* @{ */
* @{
*/
#include <metal/irq.h>
#include <metal/list.h>
@ -64,12 +65,14 @@ struct metal_irq {
/** Libmetal interrupt controller structure */
struct metal_irq_controller {
int irq_base; /**< Start of IRQ number of the range managed by
the IRQ controller */
* the IRQ controller
*/
int irq_num; /**< Number of IRQs managed by the IRQ controller */
void *arg; /**< Argument to pass to interrupt controller function */
metal_irq_set_enable irq_set_enable; /**< function to set IRQ eanble */
metal_cntr_irq_register irq_register; /**< function to register IRQ
handler */
* handler
*/
struct metal_list node; /**< list node */
struct metal_irq *irqs; /**< Array of IRQs managed by the controller */
};
@ -87,7 +90,7 @@ struct metal_irq_controller {
.irq_set_enable = _irq_set_enable, \
.irq_register = _irq_register, \
.irqs = _irqs,\
};
}
/**
* @brief metal_irq_register_controller
@ -115,7 +118,7 @@ int metal_irq_register_controller(struct metal_irq_controller *cntr);
static inline
int metal_irq_handle(struct metal_irq *irq_data, int irq)
{
if (irq_data != NULL && irq_data->hd != NULL) {
if (irq_data && irq_data->hd) {
return irq_data->hd(irq, irq_data->arg);
} else {
return METAL_IRQ_NOT_HANDLED;


@ -19,7 +19,8 @@ extern "C" {
#endif
/** \defgroup list List Primitives
* @{ */
* @{
*/
struct metal_list {
struct metal_list *next, *prev;
@ -39,7 +40,8 @@ struct metal_list {
static inline void metal_list_init(struct metal_list *list)
{
list->next = list->prev = list;
list->prev = list;
list->next = list;
}
static inline void metal_list_add_before(struct metal_list *node,
@ -81,7 +83,8 @@ static inline void metal_list_del(struct metal_list *node)
{
node->next->prev = node->prev;
node->prev->next = node->next;
node->next = node->prev = node;
node->prev = node;
node->next = node;
}
static inline struct metal_list *metal_list_first(struct metal_list *list)


@ -16,7 +16,7 @@ void metal_default_log_handler(enum metal_log_level level,
#ifdef DEFAULT_LOGGER_ON
char msg[1024];
va_list args;
static const char *level_strs[] = {
static const char * const level_strs[] = {
"metal: emergency: ",
"metal: alert: ",
"metal: critical: ",


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup logging Library Logging Interfaces
* @{ */
* @{
*/
/** Log message priority levels for libmetal. */
enum metal_log_level {
@ -70,7 +71,6 @@ extern enum metal_log_level metal_get_log_level(void);
extern void metal_default_log_handler(enum metal_log_level level,
const char *format, ...);
/**
* Emit a log message if the log level permits.
*


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup mutex Mutex Interfaces
* @{ */
* @{
*/
#include <metal/system/@PROJECT_SYSTEM@/mutex.h>
@ -40,7 +41,7 @@ static inline void metal_mutex_deinit(metal_mutex_t *mutex)
}
/**
* @brief Try to acquire a mutex
* @brief Try to acquire a mutex
* @param[in] mutex Mutex to mutex.
* @return 0 on failure to acquire, non-zero on success.
*/
@ -50,7 +51,7 @@ static inline int metal_mutex_try_acquire(metal_mutex_t *mutex)
}
/**
* @brief Acquire a mutex
* @brief Acquire a mutex
* @param[in] mutex Mutex to mutex.
*/
static inline void metal_mutex_acquire(metal_mutex_t *mutex)


@ -19,7 +19,8 @@ extern "C" {
#endif
/** \defgroup shmem Shared Memory Interfaces
* @{ */
* @{
*/
/** Generic shared memory data structure. */
struct metal_generic_shmem {


@ -19,7 +19,8 @@ extern "C" {
#endif
/** \defgroup sleep Sleep Interfaces
* @{ */
* @{
*/
/**
* @brief delay in microseconds


@ -20,9 +20,9 @@
static const int metal_softirq_num = num; \
static struct metal_irq metal_softirqs[num]; \
static atomic_char metal_softirq_pending[num]; \
static atomic_char metal_softirq_enabled[num]; \
static atomic_char metal_softirq_enabled[num];
static int metal_softirq_avail = 0;
static int metal_softirq_avail;
METAL_SOFTIRQ_ARRAY_DECLARE(METAL_SOFTIRQ_NUM)
static void metal_softirq_set_enable(struct metal_irq_controller *cntr,
@ -45,7 +45,7 @@ static METAL_IRQ_CONTROLLER_DECLARE(metal_softirq_cntr,
METAL_IRQ_ANY, METAL_SOFTIRQ_NUM,
NULL,
metal_softirq_set_enable, NULL,
metal_softirqs)
metal_softirqs);
void metal_softirq_set(int irq)
{
@ -62,7 +62,7 @@ void metal_softirq_set(int irq)
atomic_store(&metal_softirq_pending[irq], 1);
}
int metal_softirq_init()
int metal_softirq_init(void)
{
return metal_irq_register_controller(&metal_softirq_cntr);
}
@ -82,7 +82,7 @@ int metal_softirq_allocate(int num)
return irq_base;
}
void metal_softirq_dispatch()
void metal_softirq_dispatch(void)
{
int i;


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup soft irq Interrupt Handling Interfaces
* @{ */
* @{
*/
#include <metal/irq.h>
@ -28,14 +29,14 @@ extern "C" {
*
* @return 0 on success, or negative value for failure
*/
int metal_softirq_init();
int metal_softirq_init(void);
/**
* @brief metal_softirq_dispatch
*
* Dispatch the pending soft IRQs
*/
void metal_softirq_dispatch();
void metal_softirq_dispatch(void);
/**
* @brief metal_softirq_allocate


@ -20,7 +20,9 @@ extern "C" {
#endif
/** \defgroup spinlock Spinlock Interfaces
* @{ */
* @{
*/
struct metal_spinlock {
atomic_flag v;
};


@ -23,7 +23,8 @@ extern "C" {
#endif
/** \defgroup system Top Level Interfaces
* @{ */
* @{
*/
/** Physical address type. */
typedef unsigned long metal_phys_addr_t;


@ -24,7 +24,7 @@ extern "C" {
static inline void *metal_allocate_memory(unsigned int size)
{
return (pvPortMalloc(size));
return pvPortMalloc(size);
}
static inline void metal_free_memory(void *ptr)


@ -24,10 +24,10 @@ extern "C" {
struct metal_condition {
metal_mutex_t *m; /**< mutex.
The condition variable is attached to
this mutex when it is waiting.
It is also used to check correctness
in case there are multiple waiters. */
* The condition variable is attached to this mutex
* when it is waiting. It is also used to check
* correctness in case there are multiple waiters.
*/
atomic_int v; /**< condition variable value. */
};
@ -39,7 +39,6 @@ static inline void metal_condition_init(struct metal_condition *cv)
{
/* TODO: Implement condition variable for FreeRTOS */
(void)cv;
return;
}
static inline int metal_condition_signal(struct metal_condition *cv)
@ -56,7 +55,6 @@ static inline int metal_condition_broadcast(struct metal_condition *cv)
return 0;
}
#ifdef __cplusplus
}
#endif


@ -16,7 +16,7 @@
int metal_generic_dev_sys_open(struct metal_device *dev)
{
struct metal_io_region *io;
unsigned i;
unsigned int i;
/* map I/O memory regions */
for (i = 0; i < dev->num_regions; i++) {


@ -26,6 +26,7 @@ extern "C" {
static inline int __metal_sleep_usec(unsigned int usec)
{
const TickType_t xDelay = usec / portTICK_PERIOD_MS;
vTaskDelay(xDelay);
return 0;
}


@ -42,7 +42,7 @@ static METAL_IRQ_CONTROLLER_DECLARE(xlnx_irq_cntr,
0, MAX_IRQS,
NULL,
metal_xlnx_irq_set_enable, NULL,
irqs)
irqs);
/**
* @brief default handler


@ -20,13 +20,13 @@
#include "xscugic.h"
/* Translation table is 16K in size */
#define ARM_AR_MEM_TTB_SIZE 16*1024
#define ARM_AR_MEM_TTB_SIZE (16*1024)
/* Each TTB descriptor covers a 1MB region */
#define ARM_AR_MEM_TTB_SECT_SIZE 1024*1024
#define ARM_AR_MEM_TTB_SECT_SIZE (1024*1024)
/* Mask off lower bits of addr */
#define ARM_AR_MEM_TTB_SECT_SIZE_MASK (~(ARM_AR_MEM_TTB_SECT_SIZE-1UL))
#define ARM_AR_MEM_TTB_SECT_SIZE_MASK (~(ARM_AR_MEM_TTB_SECT_SIZE-1UL))
void sys_irq_restore_enable(unsigned int flags)
{
@ -37,7 +37,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -75,12 +75,16 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
if (!flags)
return va;
/* Ensure the virtual and physical addresses are aligned on a
section boundary */
/*
* Ensure the virtual and physical addresses are aligned on a
* section boundary
*/
pa &= ARM_AR_MEM_TTB_SECT_SIZE_MASK;
/* Loop through entire region of memory (one MMU section at a time).
Each section requires a TTB entry. */
/*
* Loop through entire region of memory (one MMU section at a time).
* Each section requires a TTB entry.
*/
for (section_offset = 0; section_offset < size;
section_offset += ARM_AR_MEM_TTB_SECT_SIZE) {


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -31,7 +31,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -66,7 +66,7 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
{
unsigned long section_offset;
unsigned long ttb_addr;
#if defined (__aarch64__)
#if defined(__aarch64__)
unsigned long ttb_size = (pa < 4*GB) ? 2*MB : 1*GB;
#else
unsigned long ttb_size = 1*MB;
@ -78,8 +78,10 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
/* Ensure alignement on a section boundary */
pa &= ~(ttb_size-1UL);
/* Loop through entire region of memory (one MMU section at a time).
Each section requires a TTB entry. */
/*
* Loop through entire region of memory (one MMU section at a time).
* Each section requires a TTB entry.
*/
for (section_offset = 0; section_offset < size; ) {
/* Calculate translation table entry for this memory section */
ttb_addr = (pa + section_offset);
@ -87,9 +89,12 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
/* Write translation table entry value to entry address */
Xil_SetTlbAttributes(ttb_addr, flags);
#if defined (__aarch64__)
/* recalculate if we started below 4GB and going above in 64bit mode */
if ( ttb_addr >= 4*GB ) {
#if defined(__aarch64__)
/*
* recalculate if we started below 4GB and going above in
* 64bit mode
*/
if (ttb_addr >= 4*GB) {
ttb_size = 1*GB;
}
#endif


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -31,7 +31,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -69,7 +69,7 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
if (!flags)
return va;
while(1) {
while (1) {
if (rsize < size) {
rsize <<= 1;
continue;


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -24,7 +24,7 @@ extern "C" {
static inline void *metal_allocate_memory(unsigned int size)
{
return (malloc(size));
return malloc(size);
}
static inline void metal_free_memory(void *ptr)


@ -42,7 +42,7 @@ int metal_condition_wait(struct metal_condition *cv,
}
metal_generic_default_poll();
metal_irq_restore_enable(flags);
} while(1);
} while (1);
/* Acquire the mutex again. */
metal_mutex_acquire(m);
return 0;


@ -28,10 +28,11 @@ extern "C" {
struct metal_condition {
atomic_uintptr_t mptr; /**< mutex pointer.
The condition variable is attached to
this mutex when it is waiting.
It is also used to check correctness
in case there are multiple waiters. */
* The condition variable is attached to
* this mutex when it is waiting.
* It is also used to check correctness
* in case there are multiple waiters.
*/
atomic_int v; /**< condition variable value. */
};


@ -17,7 +17,7 @@
int metal_generic_dev_sys_open(struct metal_device *dev)
{
struct metal_io_region *io;
unsigned i;
unsigned int i;
/* map I/O memory regions */
for (i = 0; i < dev->num_regions; i++) {


@ -39,6 +39,7 @@ unsigned int sys_irq_save_disable(void)
void sys_irq_restore_enable(unsigned int flags)
{
unsigned int tmp;
if (flags)
asm volatile(" msrset %0, %1 \n"
: "=r"(tmp)
@ -113,20 +114,18 @@ void metal_weak sys_irq_disable(unsigned int vector)
void metal_machine_cache_flush(void *addr, unsigned int len)
{
if (!addr && !len){
if (!addr && !len) {
Xil_DCacheFlush();
}
else{
} else{
Xil_DCacheFlushRange((intptr_t)addr, len);
}
}
void metal_machine_cache_invalidate(void *addr, unsigned int len)
{
if (!addr && !len){
if (!addr && !len) {
Xil_DCacheInvalidate();
}
else {
} else {
Xil_DCacheInvalidateRange((intptr_t)addr, len);
}
}


@ -42,7 +42,7 @@ static METAL_IRQ_CONTROLLER_DECLARE(xlnx_irq_cntr,
0, MAX_IRQS,
NULL,
metal_xlnx_irq_set_enable, NULL,
irqs)
irqs);
/**
* @brief default handler


@ -20,10 +20,10 @@
#include "xscugic.h"
/* Each TTB descriptor covers a 1MB region */
#define ARM_AR_MEM_TTB_SECT_SIZE 1024*1024
#define ARM_AR_MEM_TTB_SECT_SIZE (1024*1024)
/* Mask off lower bits of addr */
#define ARM_AR_MEM_TTB_SECT_SIZE_MASK (~(ARM_AR_MEM_TTB_SECT_SIZE-1UL))
#define ARM_AR_MEM_TTB_SECT_SIZE_MASK (~(ARM_AR_MEM_TTB_SECT_SIZE-1UL))
void sys_irq_restore_enable(unsigned int flags)
{
@ -34,7 +34,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -72,12 +72,16 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
if (!flags)
return va;
/* Ensure the virtual and physical addresses are aligned on a
section boundary */
/*
* Ensure the virtual and physical addresses are aligned on a
* section boundary
*/
pa &= ARM_AR_MEM_TTB_SECT_SIZE_MASK;
/* Loop through entire region of memory (one MMU section at a time).
Each section requires a TTB entry. */
/*
* Loop through entire region of memory (one MMU section at a time).
* Each section requires a TTB entry.
*/
for (section_offset = 0; section_offset < size;
section_offset += ARM_AR_MEM_TTB_SECT_SIZE) {


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -31,7 +31,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -66,7 +66,7 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
{
unsigned long section_offset;
unsigned long ttb_addr;
#if defined (__aarch64__)
#if defined(__aarch64__)
unsigned long ttb_size = (pa < 4*GB) ? 2*MB : 1*GB;
#else
unsigned long ttb_size = 1*MB;
@ -78,8 +78,10 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
/* Ensure alignement on a section boundary */
pa &= ~(ttb_size-1UL);
/* Loop through entire region of memory (one MMU section at a time).
Each section requires a TTB entry. */
/*
* Loop through entire region of memory (one MMU section at a time).
* Each section requires a TTB entry.
*/
for (section_offset = 0; section_offset < size; ) {
/* Calculate translation table entry for this memory section */
ttb_addr = (pa + section_offset);
@ -87,9 +89,12 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
/* Write translation table entry value to entry address */
Xil_SetTlbAttributes(ttb_addr, flags);
#if defined (__aarch64__)
/* recalculate if we started below 4GB and going above in 64bit mode */
if ( ttb_addr >= 4*GB ) {
#if defined(__aarch64__)
/*
* recalculate if we started below 4GB and going above in
* 64bit mode
*/
if (ttb_addr >= 4*GB) {
ttb_size = 1*GB;
}
#endif


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -31,7 +31,7 @@ unsigned int sys_irq_save_disable(void)
{
unsigned int state = mfcpsr() & XIL_EXCEPTION_ALL;
if (XIL_EXCEPTION_ALL != state) {
if (state != XIL_EXCEPTION_ALL) {
Xil_ExceptionDisableMask(XIL_EXCEPTION_ALL);
}
return state;
@ -69,7 +69,7 @@ void *metal_machine_io_mem_map(void *va, metal_phys_addr_t pa,
if (!flags)
return va;
while(1) {
while (1) {
if (rsize < size) {
rsize <<= 1;
continue;


@ -29,12 +29,12 @@ extern "C" {
static inline void sys_irq_enable(unsigned int vector)
{
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_EnableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
static inline void sys_irq_disable(unsigned int vector)
{
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
XScuGic_DisableIntr(XPAR_SCUGIC_0_DIST_BASEADDR, vector);
}
#endif /* METAL_INTERNAL */


@ -24,7 +24,7 @@ extern "C" {
static inline void *metal_allocate_memory(unsigned int size)
{
return (malloc(size));
return malloc(size);
}
static inline void metal_free_memory(void *ptr)


@ -29,7 +29,6 @@ static inline void __metal_cache_flush(void *addr, unsigned int len)
*/
metal_unused(addr);
metal_unused(len);
return;
}
static inline void __metal_cache_invalidate(void *addr, unsigned int len)
@ -39,7 +38,6 @@ static inline void __metal_cache_invalidate(void *addr, unsigned int len)
*/
metal_unused(addr);
metal_unused(len);
return;
}
#ifdef __cplusplus


@ -30,11 +30,11 @@ extern "C" {
struct metal_condition {
atomic_uintptr_t mptr; /**< mutex pointer.
The condition variable is attached to
this mutex when it is waiting.
It is also used to check correctness
in case there are multiple waiters. */
* The condition variable is attached to
* this mutex when it is waiting.
* It is also used to check correctness
* in case there are multiple waiters.
*/
atomic_int waiters; /**< number of waiters. */
atomic_int wakeups; /**< number of wakeups. */
};


@ -73,8 +73,10 @@ static struct linux_device *to_linux_device(struct metal_device *device)
return metal_container_of(device, struct linux_device, device);
}
static int metal_uio_read_map_attr(struct linux_device *ldev, unsigned index,
const char *name, unsigned long *value)
static int metal_uio_read_map_attr(struct linux_device *ldev,
unsigned int index,
const char *name,
unsigned long *value)
{
const char *cls = ldev->cls_path;
struct sysfs_attribute *attr;
@ -147,7 +149,7 @@ static int metal_uio_dev_open(struct linux_bus *lbus, struct linux_device *ldev)
{
char *instance, path[SYSFS_PATH_MAX];
struct linux_driver *ldrv = ldev->ldrv;
unsigned long *phys, offset=0, size=0;
unsigned long *phys, offset = 0, size = 0;
struct metal_io_region *io;
struct dlist *dlist;
int result, i;
@ -337,17 +339,16 @@ static int metal_uio_dev_dma_map(struct linux_bus *lbus,
}
static void metal_uio_dev_dma_unmap(struct linux_bus *lbus,
struct linux_device *ldev,
uint32_t dir,
struct metal_sg *sg,
int nents)
struct linux_device *ldev,
uint32_t dir,
struct metal_sg *sg,
int nents)
{
(void) lbus;
(void) ldev;
(void) dir;
(void) sg;
(void) nents;
return;
}
static struct linux_bus linux_bus[] = {
@ -512,10 +513,10 @@ static int metal_linux_dev_dma_map(struct metal_bus *bus,
}
static void metal_linux_dev_dma_unmap(struct metal_bus *bus,
struct metal_device *device,
uint32_t dir,
struct metal_sg *sg,
int nents)
struct metal_device *device,
uint32_t dir,
struct metal_sg *sg,
int nents)
{
struct linux_device *ldev = to_linux_device(device);
struct linux_bus *lbus = to_linux_bus(bus);


@ -16,14 +16,15 @@
struct metal_state _metal;
extern int metal_linux_irq_init();
extern void metal_linux_irq_shutdown();
extern int metal_linux_irq_init(void);
extern void metal_linux_irq_shutdown(void);
/** Sort function for page size array. */
static int metal_pagesize_compare(const void *_a, const void *_b)
{
const struct metal_page_size *a = _a, *b = _b;
long diff = a->page_size - b->page_size;
return metal_sign(diff);
}
@ -48,7 +49,7 @@ static int metal_add_page_size(const char *path, int shift, int mmap_flags)
_metal.page_sizes[index].page_size = size;
_metal.page_sizes[index].mmap_flags = mmap_flags;
strncpy(_metal.page_sizes[index].path, path, PATH_MAX);
_metal.num_page_sizes ++;
_metal.num_page_sizes++;
metal_log(METAL_LOG_DEBUG, "added page size %ld @%s\n", size, path);
@ -87,6 +88,7 @@ static int metal_init_page_sizes(void)
count = gethugepagesizes(sizes, max_sizes);
for (i = 0; i < count; i++) {
int shift = metal_log2(sizes[i]);
if ((shift & MAP_HUGE_MASK) != shift)
continue;
metal_add_page_size(
@ -110,7 +112,7 @@ int metal_sys_init(const struct metal_init_params *params)
static char sysfs_path[SYSFS_PATH_MAX];
const char *tmp_path;
unsigned int seed;
FILE* urandom;
FILE *urandom;
int result;
/* Determine sysfs mount point. */


@ -32,7 +32,8 @@
static struct metal_device *irqs_devs[MAX_IRQS]; /**< Linux devices for IRQs */
static int irq_notify_fd; /**< irq handling state change notification file
descriptor */
* descriptor
*/
static metal_mutex_t irq_lock; /**< irq handling lock */
static bool irq_handling_stop; /**< stop interrupts handling */
@ -54,9 +55,9 @@ static METAL_IRQ_CONTROLLER_DECLARE(linux_irq_cntr,
0, MAX_IRQS,
NULL,
metal_linux_irq_set_enable, NULL,
irqs)
irqs);
unsigned int metal_irq_save_disable()
unsigned int metal_irq_save_disable(void)
{
/* This is to avoid deadlock if it is called in ISR */
if (pthread_self() == irq_pthread)
@ -65,14 +66,14 @@ unsigned int metal_irq_save_disable()
return 0;
}
void metal_irq_restore_enable(unsigned flags)
void metal_irq_restore_enable(unsigned int flags)
{
(void)flags;
if (pthread_self() != irq_pthread)
metal_mutex_release(&irq_lock);
}
static int metal_linux_irq_notify()
static int metal_linux_irq_notify(void)
{
uint64_t val = 1;
int ret;
@ -105,15 +106,16 @@ static void metal_linux_irq_set_enable(struct metal_irq_controller *irq_cntr,
/* Notify IRQ thread that IRQ state has changed */
ret = metal_linux_irq_notify();
if (ret < 0) {
metal_log(METAL_LOG_ERROR, "%s: failed to notify set %d enable\n",
metal_log(METAL_LOG_ERROR,
"%s: failed to notify set %d enable\n",
__func__, irq);
}
}
/**
* @brief IRQ handler
* @param[in] args not used. required for pthread.
*/
* @brief IRQ handler
* @param[in] args not used. required for pthread.
*/
static void *metal_linux_irq_handling(void *args)
{
struct sched_param param;
@ -122,12 +124,12 @@ static void *metal_linux_irq_handling(void *args)
int i, j, pfds_total;
struct pollfd *pfds;
(void) args;
(void)args;
pfds = (struct pollfd *)malloc(FD_SETSIZE * sizeof(struct pollfd));
if (!pfds) {
metal_log(METAL_LOG_ERROR, "%s: failed to allocate irq fds mem.\n",
__func__);
metal_log(METAL_LOG_ERROR,
"%s: failed to allocate irq fds mem.\n", __func__);
return NULL;
}
@ -135,13 +137,14 @@ static void *metal_linux_irq_handling(void *args)
/* Ignore the set scheduler error */
ret = sched_setscheduler(0, SCHED_FIFO, &param);
if (ret) {
metal_log(METAL_LOG_WARNING, "%s: Failed to set scheduler: %s.\n",
__func__, strerror(ret));
metal_log(METAL_LOG_WARNING,
"%s: Failed to set scheduler: %s.\n", __func__,
strerror(ret));
}
while (1) {
metal_mutex_acquire(&irq_lock);
if (irq_handling_stop == true) {
if (irq_handling_stop) {
/* Killing this IRQ handling thread */
metal_mutex_release(&irq_lock);
break;
@ -169,12 +172,13 @@ static void *metal_linux_irq_handling(void *args)
/* Waken up from interrupt */
pfds_total = j;
for (i = 0; i < pfds_total; i++) {
if ( (pfds[i].fd == irq_notify_fd) &&
(pfds[i].revents & (POLLIN | POLLRDNORM))) {
if ((pfds[i].fd == irq_notify_fd) &&
(pfds[i].revents & (POLLIN | POLLRDNORM))) {
/* IRQ registration change notification */
if (read(pfds[i].fd, (void*)&val, sizeof(uint64_t)) < 0)
if (read(pfds[i].fd,
(void *)&val, sizeof(uint64_t)) < 0)
metal_log(METAL_LOG_ERROR,
"%s, read irq fd %d failed.\n",
"%s, read irq fd %d failed\n",
__func__, pfds[i].fd);
} else if ((pfds[i].revents & (POLLIN | POLLRDNORM))) {
struct metal_device *dev = NULL;
@ -189,13 +193,15 @@ static void *metal_linux_irq_handling(void *args)
irq_handled = 1;
if (irq_handled) {
if (dev && dev->bus->ops.dev_irq_ack)
dev->bus->ops.dev_irq_ack(dev->bus, dev, fd);
dev->bus->ops.dev_irq_ack(
dev->bus, dev, fd);
}
metal_mutex_release(&irq_lock);
} else if (pfds[i].revents) {
metal_log(METAL_LOG_DEBUG,
"%s: poll unexpected. fd %d: %d\n",
__func__, pfds[i].fd, pfds[i].revents);
__func__,
pfds[i].fd, pfds[i].revents);
}
}
}
@ -204,18 +210,19 @@ static void *metal_linux_irq_handling(void *args)
}
/**
* @brief irq handling initialization
* @return 0 on sucess, non-zero on failure
*/
int metal_linux_irq_init()
* @brief irq handling initialization
* @return 0 on success, non-zero on failure
*/
int metal_linux_irq_init(void)
{
int ret;
memset(&irqs, 0, sizeof(irqs));
irq_notify_fd = eventfd(0,0);
irq_notify_fd = eventfd(0, 0);
if (irq_notify_fd < 0) {
metal_log(METAL_LOG_ERROR, "Failed to create eventfd for IRQ handling.\n");
metal_log(METAL_LOG_ERROR,
"Failed to create eventfd for IRQ handling.\n");
return -EAGAIN;
}
@ -228,9 +235,10 @@ int metal_linux_irq_init()
return -EINVAL;
}
ret = pthread_create(&irq_pthread, NULL,
metal_linux_irq_handling, NULL);
metal_linux_irq_handling, NULL);
if (ret != 0) {
metal_log(METAL_LOG_ERROR, "Failed to create IRQ thread: %d.\n", ret);
metal_log(METAL_LOG_ERROR, "Failed to create IRQ thread: %d.\n",
ret);
return -EAGAIN;
}
@ -238,9 +246,9 @@ int metal_linux_irq_init()
}
/**
* @brief irq handling shutdown
*/
void metal_linux_irq_shutdown()
* @brief irq handling shutdown
*/
void metal_linux_irq_shutdown(void)
{
int ret;
@ -249,7 +257,8 @@ void metal_linux_irq_shutdown()
metal_linux_irq_notify();
ret = pthread_join(irq_pthread, NULL);
if (ret) {
metal_log(METAL_LOG_ERROR, "Failed to join IRQ thread: %d.\n", ret);
metal_log(METAL_LOG_ERROR, "Failed to join IRQ thread: %d.\n",
ret);
}
close(irq_notify_fd);
metal_mutex_deinit(&irq_lock);
@ -258,8 +267,8 @@ void metal_linux_irq_shutdown()
void metal_linux_irq_register_dev(struct metal_device *dev, int irq)
{
if (irq > MAX_IRQS) {
metal_log(METAL_LOG_ERROR, "Failed to register device to irq %d\n",
irq);
metal_log(METAL_LOG_ERROR,
"Failed to register device to irq %d\n", irq);
return;
}
irqs_devs[irq] = dev;


@ -61,6 +61,7 @@ static inline void __metal_mutex_deinit(metal_mutex_t *mutex)
static inline int __metal_mutex_try_acquire(metal_mutex_t *mutex)
{
int val = 0;
return atomic_compare_exchange_strong(&mutex->v, &val, 1);
}


@ -77,6 +77,7 @@ static int metal_shmem_try_map(struct metal_page_size *ps, int fd, size_t size,
} else {
for (virt = mem, page = 0; page < pages; page++) {
size_t offset = page * ps->page_size;
error = metal_virt2phys(virt + offset, &phys[page]);
if (error < 0)
phys[page] = METAL_BAD_OFFSET;


@ -23,12 +23,12 @@ unsigned long long metal_get_timestamp(void)
r = clock_gettime(CLOCK_MONOTONIC, &tp);
if (r == -1) {
metal_log(METAL_LOG_ERROR,"clock_gettime failed!\n");
metal_log(METAL_LOG_ERROR, "clock_gettime failed!\n");
return t;
} else {
t = tp.tv_sec * (NS_PER_S);
t += tp.tv_nsec;
}
t = tp.tv_sec * (NS_PER_S);
t += tp.tv_nsec;
return t;
}


@ -132,11 +132,12 @@ int metal_mktemp(char *template, int fifo)
if (fifo) {
result = mkfifo(template, mode);
if (result < 0) {
if (errno == EEXIST)
continue;
metal_log(METAL_LOG_ERROR, "mkfifo(%s) failed (%s)\n",
template, strerror(errno));
return -errno;
if (errno == EEXIST)
continue;
metal_log(METAL_LOG_ERROR,
"mkfifo(%s) failed (%s)\n",
template, strerror(errno));
return -errno;
}
}


@ -18,6 +18,7 @@ struct metal_state _metal;
int metal_sys_init(const struct metal_init_params *params)
{
int ret = metal_cntr_irq_init();
if (ret >= 0)
ret = metal_bus_register(&metal_generic_bus);
return ret;


@ -84,7 +84,7 @@ static unsigned long metal_io_phys_to_offset_(struct metal_io_region *io,
return (char *)up_addrenv_pa_to_va(phys) - (char *)io->virt;
}
static metal_phys_addr_t metal_io_phys_start_ = 0;
static metal_phys_addr_t metal_io_phys_start_;
static struct metal_io_region metal_io_region_ = {
.virt = NULL,


@ -45,7 +45,7 @@ static int metal_cntr_irq_handler(int irq, void *context, void *data)
/* context == NULL mean unregister */
irqchain_detach(irq, metal_cntr_irq_handler, data);
sched_kfree(data);
metal_free_memory(data);
return 0;
}
@ -85,6 +85,6 @@ int metal_cntr_irq_init(void)
NULL,
metal_cntr_irq_set_enable,
metal_cntr_irq_attach,
NULL)
NULL);
return metal_irq_register_controller(&metal_cntr_irq);
}


@ -14,7 +14,7 @@
#if (CONFIG_HEAP_MEM_POOL_SIZE <= 0)
void* metal_weak metal_zephyr_allocate_memory(unsigned int size)
void *metal_weak metal_zephyr_allocate_memory(unsigned int size)
{
(void)size;
return NULL;


@ -32,7 +32,6 @@ static inline void __metal_cache_invalidate(void *addr, unsigned int len)
{
metal_unused(addr);
metal_unused(len);
return;
}
#ifdef __cplusplus


@ -44,7 +44,7 @@ int metal_condition_wait(struct metal_condition *cv,
}
metal_generic_default_poll();
metal_irq_restore_enable(flags);
} while(1);
} while (1);
/* Acquire the mutex again. */
metal_mutex_acquire(m);
return 0;


@ -27,10 +27,11 @@ extern "C" {
struct metal_condition {
atomic_uintptr_t mptr; /**< mutex pointer.
The condition variable is attached to
this mutex when it is waiting.
It is also used to check correctness
in case there are multiple waiters. */
* The condition variable is attached to
* this mutex when it is waiting.
* It is also used to check correctness
* in case there are multiple waiters.
*/
atomic_int v; /**< condition variable value. */
};


@ -13,7 +13,7 @@
#include <metal/log.h>
#include <zephyr.h>
static const char *level_strs[] = {
static const char * const level_strs[] = {
"metal: emergency: ",
"metal: alert: ",
"metal: critical: ",


@ -48,25 +48,25 @@ static inline void __metal_mutex_deinit(metal_mutex_t *m)
static inline int __metal_mutex_try_acquire(metal_mutex_t *m)
{
int key = irq_lock(), ret = 1;
int key = irq_lock(), ret = 1;
if (m->count) {
m->count = 0;
ret = 0;
}
if (m->count) {
m->count = 0;
ret = 0;
}
irq_unlock(key);
irq_unlock(key);
return ret;
return ret;
}
static inline int __metal_mutex_is_acquired(metal_mutex_t *m)
{
int key = irq_lock(), ret;
int key = irq_lock(), ret;
ret = m->count;
irq_unlock(key);
irq_unlock(key);
return ret;
}


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup time TIME Interfaces
* @{ */
* @{
*/
#include <stdint.h>
#include <metal/sys.h>


@ -21,7 +21,8 @@ extern "C" {
#endif
/** \defgroup utilities Simple Utilities
* @{ */
* @{
*/
/** Marker for unused function arguments/variables. */
#define metal_unused(x) do { (x) = (x); } while (0)
@ -64,7 +65,7 @@ extern "C" {
/** Compute offset of a field within a structure. */
#define metal_offset_of(structure, member) \
((uintptr_t) &(((structure *) 0)->member))
((uintptr_t)&(((structure *)0)->member))
/** Compute pointer to a structure given a pointer to one of its fields. */
#define metal_container_of(ptr, structure, member) \
@ -104,9 +105,10 @@ metal_bitmap_next_set_bit(unsigned long *bitmap, unsigned int start,
unsigned int max)
{
unsigned int bit;
for (bit = start;
bit < max && !metal_bitmap_is_bit_set(bitmap, bit);
bit ++)
bit++)
;
return bit;
}
@ -121,9 +123,10 @@ metal_bitmap_next_clear_bit(unsigned long *bitmap, unsigned int start,
unsigned int max)
{
unsigned int bit;
for (bit = start;
bit < max && !metal_bitmap_is_bit_clear(bitmap, bit);
bit ++)
bit++)
;
return bit;
}
@ -139,7 +142,7 @@ static inline unsigned long metal_log2(unsigned long in)
metal_assert((in & (in - 1)) == 0);
for (result = 0; (1UL << result) < in; result ++)
for (result = 0; (1UL << result) < in; result++)
;
return result;
}


@ -13,15 +13,15 @@ int metal_ver_major(void)
int metal_ver_minor(void)
{
return METAL_VER_MINOR;
return METAL_VER_MINOR;
}
int metal_ver_patch(void)
{
return METAL_VER_PATCH;
return METAL_VER_PATCH;
}
const char *metal_ver(void)
{
return METAL_VER;
return METAL_VER;
}


@ -17,7 +17,8 @@ extern "C" {
#endif
/** \defgroup versions Library Version Interfaces
* @{ */
* @{
*/
/**
* @brief Library major version number.