coost: a tiny boost library in C++11

coost is a cross-platform C++ base library that combines performance with ease of use. It was originally named co and later renamed cocoyaxi; the former was too short and the latter too long, so a middle path was taken and it became coost.

Why is it called coost? A friend used to call it a "small boost" library: a little smaller than boost, hence coost. How small? The static library compiled on Linux and macOS is only about 1 MB. Small as it is, it provides plenty of powerful features:

  • Command-line argument and configuration file parsing library (flag)
  • High-performance logging library (log)
  • Unit testing framework (unitest)
  • Go-style coroutines
  • Coroutine-based network programming framework
  • Efficient JSON library
  • JSON-based RPC framework
  • Metaphysics-oriented programming
  • Atomic operations (atomic)
  • Random number generator (random)
  • Efficient character stream (fastream)
  • Efficient string (fastring)
  • String utilities (str)
  • Time library (time)
  • Thread library (thread)
  • Timed task scheduler
  • High-performance memory allocator
  • LruMap
  • Hash library
  • Path library
  • File system operations (fs)
  • System operations (os)

This release jumps directly from v2.0.3 to v3.0.0, a large span, with comprehensive improvements in performance, ease of use, stability, and other aspects.

Performance optimization

Memory allocator

v3.0 implements a new memory allocator (co/malloc). It is not a general-purpose allocator: free and realloc require the caller to pass the size of the original memory block, which can be slightly inconvenient, but it simplifies the allocator's design and improves its performance.

With common memory allocators such as ptmalloc, jemalloc, tcmalloc, and mimalloc, freed small memory blocks are likely to be cached internally rather than returned to the operating system, which can look like a memory leak: memory usage does not drop even after a large amount of small memory is released. co/malloc optimizes for this. While keeping performance in mind, it returns as much freed memory to the operating system as possible, which helps reduce the program's memory footprint and has shown good results in real-world measurements.

co/malloc also provides co::stl_allocator, which can replace std::allocator, the default allocator in STL containers. co/stl.h provides some commonly used containers with the allocator already replaced; they have a performance advantage over their std counterparts.

co/malloc has become the default memory allocator used internally by coost; fastring, fastream, Json, etc. are all based on co/malloc. A simple benchmark is provided in co/test and can be built and run with the following commands:

xmake b mem
xmake r mem -t 4 -s

-t specifies the number of threads, and -s requests a comparison against the system's memory allocator. Below are the results on different systems (4 threads):

| os/cpu         | co::alloc | co::free | ::malloc | ::free | speedup   |
|----------------|-----------|----------|----------|--------|-----------|
| win/AMD 3.2G   | 7.32      | 6.83     | 86.05    | 105.06 | 11.7/15.3 |
| mac/i7 2.4G    | 9.91      | 9.86     | 55.64    | 60.20  | 5.6/6.1   |
| linux/i7 2.2G  | 10.80     | 7.51     | 1070.5   | 21.17  | 99.1/2.8  |

In the table above, times are in nanoseconds (ns); linux is an Ubuntu system running in Windows WSL, and speedup is the performance improvement of co/malloc relative to the system allocator (alloc/free). As can be seen, co::alloc is nearly 99 times faster than ::malloc on Linux. An important reason is that ptmalloc suffers heavy lock contention in multi-threaded environments, whereas co/malloc is designed to avoid locks as much as possible: allocating and freeing small memory blocks requires no lock, and even cross-thread frees do not use a spin lock.

Atomic operations

In v3.0, atomic operations support memory orders to meet the needs of some high-performance scenarios. co/atomic.h defines 6 memory orders, consistent with the C++11 standard:

mo_relaxed    mo_consume    mo_acquire 
mo_release    mo_acq_rel    mo_seq_cst

int i = 0;
uint64 u = 0;
atomic_inc(&i, mo_relaxed);
atomic_load(&i, mo_relaxed);
atomic_add(&u, 3); // mo_seq_cst
atomic_add(&u, 7, mo_acquire);

Ease of use improved

Simplified initialization process

In v2.0.3, the main function had to be written like this:

#include "co/flag.h"
#include "co/log.h"
#include "co/co.h"

int main(int argc, char** argv) {
    flag::init(argc, argv);
    log::init();
    co::init();
    // do something here...
    co::exit();
    return 0;
}
To improve the user experience, v3.0 removes APIs such as log::init(), co::init(), and co::exit(). Now the main function can be written as follows:

#include "co/flag.h"

int main(int argc, char** argv) {
    flag::init(argc, argv);
    // do something here...
    return 0;
}

In v3.0, the only initialization interface the entire coost library needs is flag::init(), which parses command-line arguments and configuration files.


co/flag

co/flag is an easy-to-use command-line argument and configuration file parsing library. Components in coost such as the log library, coroutines, and the RPC framework use it to define their configuration items.

v3.0 improves some details; for example, --help now shows only the flags defined by the user, not those defined internally by coost.

v3.0 also adds the flag alias function. When defining a flag, you can specify any number of aliases:

DEF_bool(debug, false, "");         // no alias
DEF_bool(debug, false, "", d);      // d is an alias of debug
DEF_bool(debug, false, "", d, dbg); // 2 aliases


co/log

In v2.0.3, co/log provided glog-like leveled logging, dividing logs into debug, info, warning, error, and fatal levels. v3.0 adds TLOG, i.e. logs classified by topic, with logs of different topics written to different files.

In v3.0, co/log no longer requires manual initialization by the user: include co/log.h and it is ready to use, with no special setup required.

#include "co/log.h"

int main(int argc, char** argv) {
    flag::init(argc, argv);

    DLOG << "hello " << 23; // debug
    LOG << "hello " << 23;  // info
    WLOG << "hello " << 23; // warning
    ELOG << "hello " << 23; // error
    FLOG << "hello " << 23; // fatal
    TLOG("rpc") << "hello " << 23;
    TLOG("xxx") << "hello " << 23;

    return 0;
}


co/json

In v2.0.3, JSON objects were built on a contiguous block of memory, which reduces memory allocations and improves performance, but hurts ease of use. With co/malloc in v3.0, JSON adopts a more flexible implementation that keeps the high performance while qualitatively improving usability.

Performance comparison of co/json and rapidjson

The table columns are: os, co/json stringify, co/json parse, rapidjson stringify, rapidjson parse, and speedup. The test was run on the minimized twitter.json; times are in microseconds (us), and speedup is the performance improvement of co/json relative to rapidjson.

v3.0 implements the Json class with a fluent interface design, making it more convenient to use.

// {"a":23,"b":false,"s":"123","v":[1,2,3],"o":{"xx":0}}
Json x = {
    { "a", 23 },
    { "b", false },
    { "s", "123" },
    { "v", {1,2,3} },
    { "o", {
        {"xx", 0}
    }}
};

// equal to x
Json y = Json()
    .add_member("a", 23)
    .add_member("b", false)
    .add_member("s", "123")
    .add_member("v", Json().push_back(1).push_back(2).push_back(3))
    .add_member("o", Json().add_member("xx", 0));

x.get("a").as_int();       // 23
x.get("s").as_string();    // "123"
x.get("s").as_int();       // 123, string -> int
x.get("v", 0).as_int();    // 1
x.get("v", 2).as_int();    // 3
x.get("o", "xx").as_int(); // 0

x["a"] == 23;          // true
x["s"] == "123";       // true
x.get("o", "xx") != 0; // false


RPC framework

In v3.0, the RPC framework adds support for the HTTP protocol, integrating the RPC service with the HTTP service.

#include "co/all.h"

int main(int argc, char** argv) {
    flag::init(argc, argv);

    rpc::Server()
        .add_service(new xx::HelloWorldImpl)
        .start("", 7788, "/xx");

    for (;;) sleep::sec(80000);
    return 0;
}

rpc::Server can host multiple services; calling the start() method starts the RPC service, and the method's third parameter specifies the URL path of the HTTP service. Users can call its services via rpc::Client, or via HTTP, for example:

curl http://localhost:7788/xx --request POST --data '{"api":"ping"}'
curl http://localhost:7788/xx --request POST --data '{"api":"HelloWorld.hello"}'

The body of an RPC or HTTP request is a JSON string that uses the api field to indicate the method to call; the value of this field generally takes the form service.method. ping is a special service built into the coost RPC framework that can be used for heartbeats or testing.

Improved stability

coost provides an easy-to-use unit testing framework, and the unitest directory contains a large amount of unit test code covering almost all of coost's internal components, which provides an important guarantee of coost's stability.

v3.0 adds a lot of unit test code, further improving coverage. In addition, for features that are inconvenient to unit test, a large amount of separate test code is provided in the test directory.

