Apollo Cyber Study. (cyber/class_loader)

// Study: these are my own notes

Below I have collected in one place what CYBER_REGISTER_COMPONENT actually expands to.

template <typename Derived, typename Base>
void RegisterClass(const std::string& class_name,
                   const std::string& base_class_name) {
  // Study: GetCurLoadingLibraryName() is a singleton.
  // Its value is only changed in LoadLibrary in class_loader_utility
  AINFO << "registerclass:" << class_name << "," << base_class_name << ","
        << GetCurLoadingLibraryName();

  // Study: ClassFactory only requires that the Derived class (the module's
  // real class) has a no-argument constructor
  utility::AbstractClassFactory<Base>* new_class_factrory_obj =
      new utility::ClassFactory<Derived, Base>(class_name, base_class_name);
  new_class_factrory_obj->AddOwnedClassLoader(GetCurActiveClassLoader());
  new_class_factrory_obj->SetRelativeLibraryPath(GetCurLoadingLibraryName());

  GetClassFactoryMapMapMutex().lock();
  ClassClassFactoryMap& factory_map =
      GetClassFactoryMapByBaseClass(typeid(Base).name());
  factory_map[class_name] = new_class_factrory_obj;
  GetClassFactoryMapMapMutex().unlock();
}

// Study: Use a proxy class whose static instance's constructor calls
// RegisterClass as soon as the shared object that invoked
// CYBER_REGISTER_COMPONENT has been loaded
#define CLASS_LOADER_REGISTER_CLASS_INTERNAL(Derived, Base, UniqueID)     \
  namespace {                                                             \
  struct ProxyType##UniqueID {                                            \
    ProxyType##UniqueID() {                                               \
      apollo::cyber::class_loader::utility::RegisterClass<Derived, Base>( \
          #Derived, #Base);                                               \
    }                                                                     \
  };                                                                      \
  static ProxyType##UniqueID g_register_class_##UniqueID;                 \
  }

#define CLASS_LOADER_REGISTER_CLASS_INTERNAL_1(Derived, Base, UniqueID) \
  CLASS_LOADER_REGISTER_CLASS_INTERNAL(Derived, Base, UniqueID)

// Study: Assign a unique id to each registered component to avoid name
// collisions
// register class macro
#define CLASS_LOADER_REGISTER_CLASS(Derived, Base) \
  CLASS_LOADER_REGISTER_CLASS_INTERNAL_1(Derived, Base, __COUNTER__)

// Study: This is the macro used in the modules
// All modules are subclasses of apollo::cyber::ComponentBase
#define CYBER_REGISTER_COMPONENT(name) \
  CLASS_LOADER_REGISTER_CLASS(name, apollo::cyber::ComponentBase)
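To see the proxy-object trick concretely, here is a minimal, self-contained mock of the same pattern (the `Registry` map and all names below are mine, not Apollo's — they stand in for the class-factory map): a static instance of a generated struct runs the registration during static initialization, which is exactly what happens when the shared object is loaded.

```cpp
#include <cassert>
#include <map>
#include <string>

// Minimal mock registry: class name -> base class name
// (stands in for Apollo's factory map).
static std::map<std::string, std::string>& Registry() {
  static std::map<std::string, std::string> m;
  return m;
}

template <typename Derived, typename Base>
void RegisterClass(const std::string& class_name,
                   const std::string& base_name) {
  Registry()[class_name] = base_name;
}

// Same shape as CLASS_LOADER_REGISTER_CLASS_INTERNAL: a proxy struct whose
// static instance calls RegisterClass during static initialization.
#define REGISTER_CLASS_INTERNAL(Derived, Base, UniqueID)                     \
  namespace {                                                                \
  struct ProxyType##UniqueID {                                               \
    ProxyType##UniqueID() { RegisterClass<Derived, Base>(#Derived, #Base); } \
  };                                                                         \
  static ProxyType##UniqueID g_register_class_##UniqueID;                    \
  }
// The extra level of indirection lets __COUNTER__ expand before ## pasting.
#define REGISTER_CLASS_INTERNAL_1(Derived, Base, UniqueID) \
  REGISTER_CLASS_INTERNAL(Derived, Base, UniqueID)
#define REGISTER_CLASS(Derived, Base) \
  REGISTER_CLASS_INTERNAL_1(Derived, Base, __COUNTER__)

struct ComponentBase {};
struct MyComponent : ComponentBase {};

// Equivalent of CYBER_REGISTER_COMPONENT(MyComponent) in this mock.
REGISTER_CLASS(MyComponent, ComponentBase)
```

Nothing in `main` has to mention `MyComponent`: the registration side effect runs before `main` starts, which is why merely `dlopen`-ing a component library is enough to make it creatable by name.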

cyber/class_loader/utility/class_loader_utility

Although it is called a utility, the real class-loading work all lives here.
The implementation in the .cc file is nothing special, so I will not cover it.

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_CLASS_LOADER_CLASS_LOADER_UTILITY_H_
#define CYBER_CLASS_LOADER_CLASS_LOADER_UTILITY_H_

#include <Poco/SharedLibrary.h>
#include <cassert>
#include <cstdio>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <typeinfo>
#include <utility>
#include <vector>

#include "cyber/class_loader/utility/class_factory.h"
#include "cyber/common/log.h"

/**
* class register implement
*/
namespace apollo {
namespace cyber {
namespace class_loader {

class ClassLoader;

namespace utility {

// Study: The Poco library is used to load shared libraries
// https://vovkos.github.io/doxyrest-showcase/poco/sphinxdoc/class_Poco_SharedLibrary.html#details-doxid-class-poco-1-1-shared-library
using PocoLibraryPtr = std::shared_ptr<Poco::SharedLibrary>;
// Study: Frightening naming - reasonable, but frightening
using ClassClassFactoryMap =
    std::map<std::string, utility::AbstractClassFactoryBase*>;
using BaseToClassFactoryMapMap = std::map<std::string, ClassClassFactoryMap>;
using LibpathPocolibVector =
    std::vector<std::pair<std::string, PocoLibraryPtr>>;
using ClassFactoryVector = std::vector<AbstractClassFactoryBase*>;

// Study: Singletons and their getters, mutexes, etc.
// These act as global state
BaseToClassFactoryMapMap& GetClassFactoryMapMap();
std::recursive_mutex& GetClassFactoryMapMapMutex();
LibpathPocolibVector& GetLibPathPocoShareLibVector();
std::recursive_mutex& GetLibPathPocoShareLibMutex();
ClassClassFactoryMap& GetClassFactoryMapByBaseClass(
    const std::string& typeid_base_class_name);
std::string GetCurLoadingLibraryName();
void SetCurLoadingLibraryName(const std::string& library_name);
ClassLoader* GetCurActiveClassLoader();
void SetCurActiveClassLoader(ClassLoader* loader);

// Study: When calling LoadLibrary, a ClassLoader must be provided.
// This avoids loading the same library multiple times for different loaders
bool IsLibraryLoaded(const std::string& library_path, ClassLoader* loader);
bool IsLibraryLoadedByAnybody(const std::string& library_path);
// Study: The Core function
bool LoadLibrary(const std::string& library_path, ClassLoader* loader);
void UnloadLibrary(const std::string& library_path, ClassLoader* loader);

template <typename Derived, typename Base>
void RegisterClass(const std::string& class_name,
                   const std::string& base_class_name);
template <typename Base>
Base* CreateClassObj(const std::string& class_name, ClassLoader* loader);
template <typename Base>
std::vector<std::string> GetValidClassNames(ClassLoader* loader);

// Study: Put a class factory into the global map
template <typename Derived, typename Base>
void RegisterClass(const std::string& class_name,
                   const std::string& base_class_name) {
  AINFO << "registerclass:" << class_name << "," << base_class_name << ","
        << GetCurLoadingLibraryName();

  utility::AbstractClassFactory<Base>* new_class_factrory_obj =
      new utility::ClassFactory<Derived, Base>(class_name, base_class_name);
  new_class_factrory_obj->AddOwnedClassLoader(GetCurActiveClassLoader());
  new_class_factrory_obj->SetRelativeLibraryPath(GetCurLoadingLibraryName());

  GetClassFactoryMapMapMutex().lock();
  ClassClassFactoryMap& factory_map =
      GetClassFactoryMapByBaseClass(typeid(Base).name());
  factory_map[class_name] = new_class_factrory_obj;
  GetClassFactoryMapMapMutex().unlock();
}

// Study: Use the loaded class factory to create an object (no-argument
// constructor) through the class loader
template <typename Base>
Base* CreateClassObj(const std::string& class_name, ClassLoader* loader) {
  GetClassFactoryMapMapMutex().lock();
  ClassClassFactoryMap& factoryMap =
      GetClassFactoryMapByBaseClass(typeid(Base).name());
  AbstractClassFactory<Base>* factory = nullptr;
  if (factoryMap.find(class_name) != factoryMap.end()) {
    factory = dynamic_cast<utility::AbstractClassFactory<Base>*>(
        factoryMap[class_name]);
  }
  GetClassFactoryMapMapMutex().unlock();

  Base* classobj = nullptr;
  if (factory && factory->IsOwnedBy(loader)) {
    classobj = factory->CreateObj();
  }

  return classobj;
}

// Study: Which classes can this class loader load
template <typename Base>
std::vector<std::string> GetValidClassNames(ClassLoader* loader) {
  std::lock_guard<std::recursive_mutex> lck(GetClassFactoryMapMapMutex());

  ClassClassFactoryMap& factoryMap =
      GetClassFactoryMapByBaseClass(typeid(Base).name());
  std::vector<std::string> classes;
  for (auto& class_factory : factoryMap) {
    AbstractClassFactoryBase* factory = class_factory.second;
    if (factory && factory->IsOwnedBy(loader)) {
      classes.emplace_back(class_factory.first);
    }
  }

  return classes;
}

} // End namespace utility
} // End namespace class_loader
} // namespace cyber
} // namespace apollo
#endif // CYBER_CLASS_LOADER_CLASS_LOADER_UTILITY_H_

cyber/class_loader/class_loader

The core functionality is already implemented in the class loader utility.
The ClassLoader class provides a higher level of abstraction on top of it.
Moreover, it adds reference counting for both objects and the library,
dynamically determining the right moment to actually unload a library.

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef CYBER_CLASS_LOADER_CLASS_LOADER_H_
#define CYBER_CLASS_LOADER_CLASS_LOADER_H_

#include <algorithm>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

#include "cyber/class_loader/class_loader_register_macro.h"

namespace apollo {
namespace cyber {
namespace class_loader {

/**
* for library load, create class objects
*/
class ClassLoader {
 public:
  explicit ClassLoader(const std::string& library_path);
  // Study: Although this is virtual, there are no derived classes,
  // so the virtual can be ignored
  virtual ~ClassLoader();

  bool IsLibraryLoaded();
  // Study: Core function
  bool LoadLibrary();
  int UnloadLibrary();

  const std::string GetLibraryPath() const;
  template <typename Base>
  std::vector<std::string> GetValidClassNames();
  template <typename Base>
  std::shared_ptr<Base> CreateClassObj(const std::string& class_name);
  template <typename Base>
  bool IsClassValid(const std::string& class_name);

 private:
  template <typename Base>
  void OnClassObjDeleter(Base* obj);

 private:
  std::string library_path_;
  int loadlib_ref_count_;
  std::mutex loadlib_ref_count_mutex_;
  int classobj_ref_count_;
  std::mutex classobj_ref_count_mutex_;
};

template <typename Base>
std::vector<std::string> ClassLoader::GetValidClassNames() {
  return (utility::GetValidClassNames<Base>(this));
}

template <typename Base>
bool ClassLoader::IsClassValid(const std::string& class_name) {
  std::vector<std::string> valid_classes = GetValidClassNames<Base>();
  return (std::find(valid_classes.begin(), valid_classes.end(), class_name) !=
          valid_classes.end());
}

// Study: CreateClassObj must return a shared pointer (or some other
// wrapper) to the real object; otherwise reference counting
// would be impossible
template <typename Base>
std::shared_ptr<Base> ClassLoader::CreateClassObj(
    const std::string& class_name) {
  if (!IsLibraryLoaded()) {
    LoadLibrary();
  }

  Base* class_object = utility::CreateClassObj<Base>(class_name, this);
  if (nullptr == class_object) {
    AWARN << "CreateClassObj failed, ensure class has been registered. "
          << "classname: " << class_name << ",lib: " << GetLibraryPath();
    return std::shared_ptr<Base>();
  }

  std::lock_guard<std::mutex> lck(classobj_ref_count_mutex_);
  classobj_ref_count_ = classobj_ref_count_ + 1;
  std::shared_ptr<Base> classObjSharePtr(
      class_object, std::bind(&ClassLoader::OnClassObjDeleter<Base>, this,
                              std::placeholders::_1));
  return classObjSharePtr;
}

template <typename Base>
void ClassLoader::OnClassObjDeleter(Base* obj) {
  if (nullptr == obj) {
    return;
  }

  std::lock_guard<std::mutex> lck(classobj_ref_count_mutex_);
  delete obj;
  classobj_ref_count_ = classobj_ref_count_ - 1;
}

// Study: Automatically loads on creation
ClassLoader::ClassLoader(const std::string& library_path)
    : library_path_(library_path),
      loadlib_ref_count_(0),
      classobj_ref_count_(0) {
  LoadLibrary();
}

ClassLoader::~ClassLoader() { UnloadLibrary(); }

bool ClassLoader::IsLibraryLoaded() {
  return utility::IsLibraryLoaded(library_path_, this);
}

bool ClassLoader::LoadLibrary() {
  std::lock_guard<std::mutex> lck(loadlib_ref_count_mutex_);
  loadlib_ref_count_ = loadlib_ref_count_ + 1;
  AINFO << "Begin LoadLibrary: " << library_path_;
  return utility::LoadLibrary(library_path_, this);
}

// Study: Only unload the library once there are no more references
int ClassLoader::UnloadLibrary() {
  std::lock_guard<std::mutex> lckLib(loadlib_ref_count_mutex_);
  std::lock_guard<std::mutex> lckObj(classobj_ref_count_mutex_);

  if (classobj_ref_count_ > 0) {
    AINFO << "There are still classobjs have not been deleted, "
             "classobj_ref_count_: "
          << classobj_ref_count_;
  } else {
    loadlib_ref_count_ = loadlib_ref_count_ - 1;
    if (loadlib_ref_count_ == 0) {
      utility::UnloadLibrary(library_path_, this);
    } else {
      if (loadlib_ref_count_ < 0) {
        loadlib_ref_count_ = 0;
      }
    }
  }
  return loadlib_ref_count_;
}

const std::string ClassLoader::GetLibraryPath() const { return library_path_; }

} // namespace class_loader
} // namespace cyber
} // namespace apollo
#endif // CYBER_CLASS_LOADER_CLASS_LOADER_H_

cyber/class_loader/class_loader_manager

One more level of abstraction on top of the class loader. If ClassLoader were used directly,
every additional library would require one more class loader,
and when it came time to unload a library you would not know
which class loader to use - hence a manager to manage them.
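The idea can be sketched in a few lines. This is a minimal mock, not Apollo's actual ClassLoaderManager API: `FakeClassLoader` stands in for the real `ClassLoader`, and the manager simply owns one loader per library path so callers only ever name the library.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stand-in for class_loader::ClassLoader (the real one loads a .so file).
struct FakeClassLoader {
  explicit FakeClassLoader(std::string path) : library_path(std::move(path)) {}
  std::string library_path;
};

// The manager idea: one loader per library path, owned in a map, so callers
// never have to remember which loader belongs to which library.
class ClassLoaderManager {
 public:
  bool LoadLibrary(const std::string& path) {
    if (loaders_.count(path) == 0) {
      loaders_[path] = std::make_shared<FakeClassLoader>(path);
    }
    return true;
  }
  bool IsLibraryValid(const std::string& path) const {
    return loaders_.count(path) > 0;
  }
  // Unloading only needs the path; the manager finds (and drops) the loader.
  void UnloadLibrary(const std::string& path) { loaders_.erase(path); }

 private:
  std::map<std::string, std::shared_ptr<FakeClassLoader>> loaders_;
};
```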

Apollo Cyber Study. (cyber/event)

Compared with the various implementations in cyber/base, there is not that much to say about cyber/event.

Still, it is worth writing down.

There are basically two classes - perf_event and perf_event_cache.

As the names suggest, they are used to record the various events.

perf_event: the event definitions and their serialization format, covering SCHED_EVENT, TRANS_EVENT and TRY_FETCH_EVENT;
the third one is never actually used.

  • SCHED_EVENT: events produced by the scheduler, recording the scheduled task's id, state, proc, etc.
  • TRANS_EVENT: transport; records the msg id, seq, etc. of the transport messages produced in cyber/transport

perf_event_cache: this is where perf events are managed. The cache exposes a singleton that other modules use to add events, and internally it first puts every event into a BoundedQueue. An io thread keeps accumulating event messages and performs a single write once enough events have piled up, which avoids the waste of many small ios.
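The batching idea behind the cache can be sketched as below. This is a simplified, single-threaded illustration of the write-coalescing pattern (all names are mine; the real PerfEventCache uses a BoundedQueue and a dedicated io thread):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Events accumulate in a buffer and are written out in one shot once enough
// pile up, trading a little latency for far fewer small writes.
class EventBatcher {
 public:
  explicit EventBatcher(std::size_t flush_threshold)
      : threshold_(flush_threshold) {}

  void AddEvent(const std::string& e) {
    buffer_.push_back(e);
    if (buffer_.size() >= threshold_) Flush();
  }
  std::size_t flush_count() const { return flush_count_; }
  std::size_t pending() const { return buffer_.size(); }

 private:
  void Flush() {
    // Real code would do one bulk write of all buffered events here.
    ++flush_count_;
    buffer_.clear();
  }

  std::size_t threshold_;
  std::size_t flush_count_ = 0;
  std::vector<std::string> buffer_;
};
```

With a threshold of 3, five events cost one write instead of five; the remaining two wait for the next flush.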

Apollo Cyber Study. (cyber/base 3)

Finally done with the cyber/base part.

// Study: these are my own notes

cyber/base/reentrant_rw_lock

First, a definition of a reentrant rw lock:
it is largely equivalent to an ordinary RW lock.
The main difference is that
a thread that already holds the lock will not block when it locks again,
which saves context-switch cost.

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_REENTRANT_RW_LOCK_H_
#define CYBER_BASE_REENTRANT_RW_LOCK_H_

#include <stdint.h>
#include <unistd.h>
#include <atomic>
#include <condition_variable>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <thread>

#include "cyber/base/rw_lock_guard.h"

namespace apollo {
namespace cyber {
namespace base {

// Study: To be reentrant, the lock must first know its own thread id
static const std::thread::id NULL_THREAD_ID = std::thread::id();
class ReentrantRWLock {
  friend class ReadLockGuard<ReentrantRWLock>;
  friend class WriteLockGuard<ReentrantRWLock>;

 public:
  static const int32_t RW_LOCK_FREE = 0;
  // Study: lock_num_ == WRITE_EXCLUSIVE means a writer holds the lock
  static const int32_t WRITE_EXCLUSIVE = -1;
  static const uint32_t MAX_RETRY_TIMES = 5;
  static const std::thread::id null_thread;
  ReentrantRWLock() {}
  explicit ReentrantRWLock(bool write_first) : write_first_(write_first) {}

 private:
  // all these functions can only be used by ReadLockGuard/WriteLockGuard;
  void ReadLock();
  void WriteLock();

  void ReadUnlock();
  void WriteUnlock();

  ReentrantRWLock(const ReentrantRWLock&) = delete;
  ReentrantRWLock& operator=(const ReentrantRWLock&) = delete;
  // Study: Records which thread is holding the write lock
  std::thread::id write_thread_id_ = {NULL_THREAD_ID};
  std::atomic<uint32_t> write_lock_wait_num_ = {0};
  // Study: Repeated locking is allowed, so lock_num_ counts the locks
  std::atomic<int32_t> lock_num_ = {0};
  bool write_first_ = true;
};

inline void ReentrantRWLock::ReadLock() {
  // Study: Reentrancy check
  if (write_thread_id_ == std::this_thread::get_id()) {
    return;
  }

  uint32_t retry_times = 0;
  int32_t lock_num = lock_num_.load(std::memory_order_acquire);
  if (write_first_) {
    do {
      while (lock_num < RW_LOCK_FREE ||
             write_lock_wait_num_.load(std::memory_order_acquire) > 0) {
        if (++retry_times == MAX_RETRY_TIMES) {
          // saving cpu
          std::this_thread::yield();
          retry_times = 0;
        }
        lock_num = lock_num_.load(std::memory_order_acquire);
      }
    } while (!lock_num_.compare_exchange_weak(lock_num, lock_num + 1,
                                              std::memory_order_acq_rel,
                                              std::memory_order_relaxed));
  } else {
    do {
      while (lock_num < RW_LOCK_FREE) {
        if (++retry_times == MAX_RETRY_TIMES) {
          // saving cpu
          std::this_thread::yield();
          retry_times = 0;
        }
        lock_num = lock_num_.load(std::memory_order_acquire);
      }
    } while (!lock_num_.compare_exchange_weak(lock_num, lock_num + 1,
                                              std::memory_order_acq_rel,
                                              std::memory_order_relaxed));
  }
}

inline void ReentrantRWLock::WriteLock() {
  auto this_thread_id = std::this_thread::get_id();
  // Study: Reentrancy check
  if (write_thread_id_ == this_thread_id) {
    lock_num_.fetch_sub(1);
    return;
  }
  int32_t rw_lock_free = RW_LOCK_FREE;
  uint32_t retry_times = 0;
  write_lock_wait_num_.fetch_add(1);
  while (!lock_num_.compare_exchange_weak(rw_lock_free, WRITE_EXCLUSIVE,
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed)) {
    // rw_lock_free will change after the CAS fails, so reset it
    rw_lock_free = RW_LOCK_FREE;
    if (++retry_times == MAX_RETRY_TIMES) {
      // saving cpu
      std::this_thread::yield();
      retry_times = 0;
    }
  }
  write_thread_id_ = this_thread_id;
  write_lock_wait_num_.fetch_sub(1);
}

// Study: Compared with the non-reentrant version, there is an extra
// thread check
inline void ReentrantRWLock::ReadUnlock() {
  if (write_thread_id_ == std::this_thread::get_id()) {
    return;
  }
  lock_num_.fetch_sub(1);
}

// Study: Updating lock_num_ must also maintain write_thread_id_
inline void ReentrantRWLock::WriteUnlock() {
  if (lock_num_.fetch_add(1) == WRITE_EXCLUSIVE) {
    write_thread_id_ = NULL_THREAD_ID;
  }
}

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_REENTRANT_RW_LOCK_H_

cyber/base/signal

For the signal-and-slot concept, see Qt.

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_SIGNAL_H_
#define CYBER_BASE_SIGNAL_H_

#include <algorithm>
#include <functional>
#include <list>
#include <memory>
#include <mutex>

namespace apollo {
namespace cyber {
namespace base {

// Study: Forward declarations
template <typename... Args>
class Slot;

template <typename... Args>
class Connection;

template <typename... Args>
class Signal {
 public:
  using Callback = std::function<void(Args...)>;
  using SlotPtr = std::shared_ptr<Slot<Args...>>;
  using SlotList = std::list<SlotPtr>;
  using ConnectionType = Connection<Args...>;

  Signal() {}
  virtual ~Signal() { DisconnectAllSlots(); }

  // Study: Invoked when the signal is activated
  void operator()(Args... args) {
    // Study: limit the lock scope
    SlotList local;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      for (auto& slot : slots_) {
        local.emplace_back(slot);
      }
    }

    if (!local.empty()) {
      for (auto& slot : local) {
        (*slot)(args...);
      }
    }

    ClearDisconnectedSlots();
  }

  // Study: This is the first step in using a signal:
  // connect the signal to a slot
  ConnectionType Connect(const Callback& cb) {
    auto slot = std::make_shared<Slot<Args...>>(cb);
    {
      std::lock_guard<std::mutex> lock(mutex_);
      slots_.emplace_back(slot);
    }

    return ConnectionType(slot, this);
  }

  bool Disconnect(const ConnectionType& conn) {
    bool find = false;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      for (auto& slot : slots_) {
        if (conn.HasSlot(slot)) {
          find = true;
          slot->Disconnect();
        }
      }
    }

    if (find) {
      ClearDisconnectedSlots();
    }
    return find;
  }

  void DisconnectAllSlots() {
    std::lock_guard<std::mutex> lock(mutex_);
    for (auto& slot : slots_) {
      slot->Disconnect();
    }
    slots_.clear();
  }

 private:
  Signal(const Signal&) = delete;
  Signal& operator=(const Signal&) = delete;

  void ClearDisconnectedSlots() {
    std::lock_guard<std::mutex> lock(mutex_);
    slots_.erase(
        std::remove_if(slots_.begin(), slots_.end(),
                       [](const SlotPtr& slot) { return !slot->connected(); }),
        slots_.end());
  }

  SlotList slots_;
  std::mutex mutex_;
};

// Study: This represents the connection between a signal and a slot,
// and helps maintain the real-time connectivity
template <typename... Args>
class Connection {
 public:
  using SlotPtr = std::shared_ptr<Slot<Args...>>;
  using SignalPtr = Signal<Args...>*;

  Connection() : slot_(nullptr), signal_(nullptr) {}
  Connection(const SlotPtr& slot, const SignalPtr& signal)
      : slot_(slot), signal_(signal) {}
  virtual ~Connection() {
    slot_ = nullptr;
    signal_ = nullptr;
  }

  Connection& operator=(const Connection& another) {
    if (this != &another) {
      this->slot_ = another.slot_;
      this->signal_ = another.signal_;
    }
    return *this;
  }

  bool HasSlot(const SlotPtr& slot) const {
    if (slot != nullptr && slot_ != nullptr) {
      return slot_.get() == slot.get();
    }
    return false;
  }

  bool IsConnected() const {
    if (slot_) {
      return slot_->connected();
    }
    return false;
  }

  bool Disconnect() {
    if (signal_ && slot_) {
      return signal_->Disconnect(*this);
    }
    return false;
  }

 private:
  SlotPtr slot_;
  SignalPtr signal_;
};

template <typename... Args>
class Slot {
 public:
  using Callback = std::function<void(Args...)>;
  Slot(const Slot& another)
      : cb_(another.cb_), connected_(another.connected_) {}
  explicit Slot(const Callback& cb, bool connected = true)
      : cb_(cb), connected_(connected) {}
  virtual ~Slot() {}

  // Study: When the slot receives the signal, run the callback
  void operator()(Args... args) {
    if (connected_ && cb_) {
      cb_(args...);
    }
  }

  void Disconnect() { connected_ = false; }
  bool connected() const { return connected_; }

 private:
  Callback cb_;
  bool connected_ = true;
};

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_SIGNAL_H_

cyber/base/thread_pool

A thread pool built on top of bounded_queue.
A task is a function together with its corresponding arguments.

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_THREAD_POOL_H_
#define CYBER_BASE_THREAD_POOL_H_

#include <atomic>
#include <functional>
#include <future>
#include <memory>
#include <queue>
#include <stdexcept>
#include <thread>
#include <utility>
#include <vector>

#include "cyber/base/bounded_queue.h"

namespace apollo {
namespace cyber {
namespace base {

class ThreadPool {
 public:
  explicit ThreadPool(std::size_t thread_num, std::size_t max_task_num = 1000);

  // Study: F is the function, Args are the arguments;
  // it returns the result wrapped in a future
  template <typename F, typename... Args>
  auto Enqueue(F&& f, Args&&... args)
      -> std::future<typename std::result_of<F(Args...)>::type>;

  ~ThreadPool();

 private:
  std::vector<std::thread> workers_;
  BoundedQueue<std::function<void()>> task_queue_;
  std::atomic_bool stop_;
};

inline ThreadPool::ThreadPool(std::size_t threads, std::size_t max_task_num)
    : stop_(false) {
  if (!task_queue_.Init(max_task_num, new BlockWaitStrategy())) {
    throw std::runtime_error("Task queue init failed.");
  }
  // Study: A thread pool of course has worker threads
  for (size_t i = 0; i < threads; ++i) {
    workers_.emplace_back([this] {
      while (!stop_) {
        std::function<void()> task;
        if (task_queue_.WaitDequeue(&task)) {
          task();
        }
      }
    });
  }
}

// before using the return value, you should check value.valid()
template <typename F, typename... Args>
auto ThreadPool::Enqueue(F&& f, Args&&... args)
    -> std::future<typename std::result_of<F(Args...)>::type> {
  using return_type = typename std::result_of<F(Args...)>::type;

  auto task = std::make_shared<std::packaged_task<return_type()>>(
      std::bind(std::forward<F>(f), std::forward<Args>(args)...));

  std::future<return_type> res = task->get_future();

  // don't allow enqueueing after stopping the pool
  if (stop_) {
    return std::future<return_type>();
  }
  task_queue_.Enqueue([task]() { (*task)(); });
  return res;
}

// the destructor joins all threads
inline ThreadPool::~ThreadPool() {
  if (stop_.exchange(true)) {
    return;
  }
  task_queue_.BreakAllWait();
  for (std::thread& worker : workers_) {
    worker.join();
  }
}

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_THREAD_POOL_H_

cyber/base/thread_safe_queue

Just a queue with a mutex added on top.
To also provide a waiting dequeue, it adds a condition_variable as well.
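The pattern is small enough to sketch in full. This is a minimal version of the idea, not Apollo's exact class: a mutex guards every queue operation, and a condition variable lets `WaitDequeue` sleep until an element arrives.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal thread-safe queue: std::queue + mutex, with a condition variable
// so WaitDequeue can block until the queue becomes non-empty.
template <typename T>
class ThreadSafeQueue {
 public:
  void Enqueue(const T& v) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      queue_.push(v);
    }
    // Notify outside the lock so the woken thread can acquire it immediately.
    cv_.notify_one();
  }

  bool WaitDequeue(T* out) {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return !queue_.empty(); });
    *out = queue_.front();
    queue_.pop();
    return true;
  }

 private:
  std::mutex mutex_;
  std::condition_variable cv_;
  std::queue<T> queue_;
};
```

The predicate form of `cv_.wait` handles spurious wakeups; a production version (like Apollo's) also needs a "break all waits" path so blocked consumers can be released at shutdown.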

cyber/base/unbounded_queue

The unbounded version of bounded_queue.
Note that the two implementations are completely different: for thread safety,
unbounded_queue uses a linked list instead of a dynamic array.
In theory unbounded_queue performs worse than bounded_queue,
and since it uses compare_exchange_strong throughout, it was presumably not designed for heavily concurrent scenarios.

Apollo Cyber Study (cyber/base 2)

Continuing with cyber/base.
// Study: these are my own notes

cyber/base/rw_lock_guard

Provides two wrappers, ReadLockGuard and WriteLockGuard, for the read-write lock. They use RAII: lock in the constructor and unlock in the destructor.
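The RAII shape is easy to show. Below is a minimal sketch of the read-side guard together with a toy lock for illustration (`CountingLock` is mine, purely to make the pairing observable); the real guards work against AtomicRWLock/ReentrantRWLock via friendship.

```cpp
#include <cassert>

// RAII: lock in the constructor, unlock in the destructor, so early returns
// and exceptions still release the lock. RWLock is any type exposing
// ReadLock()/ReadUnlock().
template <typename RWLock>
class ReadLockGuard {
 public:
  explicit ReadLockGuard(RWLock& lock) : lock_(lock) { lock_.ReadLock(); }
  ~ReadLockGuard() { lock_.ReadUnlock(); }

  // A guard must not be copied, or the lock would be released twice.
  ReadLockGuard(const ReadLockGuard&) = delete;
  ReadLockGuard& operator=(const ReadLockGuard&) = delete;

 private:
  RWLock& lock_;
};

// Toy lock that just counts readers, to make the guard's pairing visible.
struct CountingLock {
  int readers = 0;
  void ReadLock() { ++readers; }
  void ReadUnlock() { --readers; }
};
```

Leaving the scope (however that happens) is what releases the lock, which is exactly why the real locks make Lock/Unlock private and only befriend the guards.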

cyber/base/atomic_rw_lock

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_ATOMIC_RW_LOCK_H_
#define CYBER_BASE_ATOMIC_RW_LOCK_H_

#include <stdint.h>
#include <unistd.h>
#include <atomic>
#include <condition_variable>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <thread>

#include "cyber/base/rw_lock_guard.h"

namespace apollo {
namespace cyber {
namespace base {

class AtomicRWLock {
  // Study: This lock may only be used through the lock guards, so the
  // lock and unlock functions are private, and the lock guards must
  // therefore be declared as friends here.
  friend class ReadLockGuard<AtomicRWLock>;
  friend class WriteLockGuard<AtomicRWLock>;

 public:
  static const int32_t RW_LOCK_FREE = 0;
  static const int32_t WRITE_EXCLUSIVE = -1;
  static const uint32_t MAX_RETRY_TIMES = 5;
  AtomicRWLock() {}
  explicit AtomicRWLock(bool write_first) : write_first_(write_first) {}

 private:
  // all these functions can only be used by ReadLockGuard/WriteLockGuard;
  void ReadLock();
  void WriteLock();

  void ReadUnlock();
  void WriteUnlock();

  AtomicRWLock(const AtomicRWLock&) = delete;
  AtomicRWLock& operator=(const AtomicRWLock&) = delete;
  std::atomic<uint32_t> write_lock_wait_num_ = {0};
  std::atomic<int32_t> lock_num_ = {0};
  bool write_first_ = true;
};

// Study: First wait all write lock release using looping
// (will reschedule this thread if still not release after N try)
// If in write frist mode, need also wait waiting write lock
inline void AtomicRWLock::ReadLock() {
uint32_t retry_times = 0;
int32_t lock_num = lock_num_.load();
if (write_first_) {
do {
while (lock_num < RW_LOCK_FREE || write_lock_wait_num_.load() > 0) {
if (++retry_times == MAX_RETRY_TIMES) {
// saving cpu
std::this_thread::yield();
retry_times = 0;
}
lock_num = lock_num_.load();
}
} while (!lock_num_.compare_exchange_weak(lock_num, lock_num + 1,
std::memory_order_acq_rel,
std::memory_order_relaxed));
} else {
do {
while (lock_num < RW_LOCK_FREE) {
if (++retry_times == MAX_RETRY_TIMES) {
// saving cpu
std::this_thread::yield();
retry_times = 0;
}
lock_num = lock_num_.load();
}
} while (!lock_num_.compare_exchange_weak(lock_num, lock_num + 1,
std::memory_order_acq_rel,
std::memory_order_relaxed));
}
}

// Study: Don't think too much, just lock
inline void AtomicRWLock::WriteLock() {
int32_t rw_lock_free = RW_LOCK_FREE;
uint32_t retry_times = 0;
write_lock_wait_num_.fetch_add(1);
while (!lock_num_.compare_exchange_weak(rw_lock_free, WRITE_EXCLUSIVE,
std::memory_order_acq_rel,
std::memory_order_relaxed)) {
// rw_lock_free is overwritten when the CAS fails, so reset it
rw_lock_free = RW_LOCK_FREE;
if (++retry_times == MAX_RETRY_TIMES) {
// saving cpu
std::this_thread::yield();
retry_times = 0;
}
}
write_lock_wait_num_.fetch_sub(1);
}

// Study: Read lock is +, unlock is -
inline void AtomicRWLock::ReadUnlock() { lock_num_.fetch_sub(1); }

// Study: Write lock is -, unlock is +
inline void AtomicRWLock::WriteUnlock() { lock_num_.fetch_add(1); }

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_ATOMIC_RW_LOCK_H_
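To make the counter convention concrete, here is a minimal single-threaded sketch (not the Apollo class; the Try* names are made up) of how `lock_num_` encodes state: 0 means free, a positive value counts readers, and -1 means a writer holds the lock exclusively.

```cpp
#include <atomic>
#include <cstdint>

// Sketch of the AtomicRWLock counter protocol (no retry/yield loop).
std::atomic<int32_t> lock_num{0};  // 0 free, >0 readers, -1 writer

bool TryReadLock() {
  int32_t n = lock_num.load();
  // A reader may enter only while no writer holds the lock.
  return n >= 0 && lock_num.compare_exchange_strong(n, n + 1);
}

void ReadUnlock() { lock_num.fetch_sub(1); }

bool TryWriteLock() {
  int32_t expected = 0;  // a writer needs the lock completely free
  return lock_num.compare_exchange_strong(expected, -1);
}

void WriteUnlock() { lock_num.fetch_add(1); }
```

This is why ReadUnlock decrements while WriteUnlock increments: both just move the counter back toward 0.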

cyber/base/bounded_queue

An atomic fixed-size queue implemented with a circular array and atomic variables.
The atomic implementation is similar to atomic_fifo, so it is not described here.
The special feature of this class is its wait_strategy_, which enables a waiting dequeue:
like a normal dequeue, except it waits if the queue is empty.
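A hypothetical sketch of that waiting-dequeue idea (the names are illustrative, not Apollo's exact API): instead of returning failure on an empty queue, the consumer blocks on a condition variable until a producer notifies it.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal blocking queue sketch: WaitDequeue blocks while empty.
template <typename T>
class WaitQueueSketch {
 public:
  void Enqueue(const T& v) {
    {
      std::lock_guard<std::mutex> lk(mu_);
      q_.push(v);
    }
    cv_.notify_one();  // wake one waiting consumer
  }

  T WaitDequeue() {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [this] { return !q_.empty(); });  // wait while empty
    T v = q_.front();
    q_.pop();
    return v;
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<T> q_;
};
```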

cyber/base/concurrent_object_pool

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_CONCURRENT_OBJECT_POOL_H_
#define CYBER_BASE_CONCURRENT_OBJECT_POOL_H_

#include <atomic>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <utility>

#include "cyber/base/for_each.h"
#include "cyber/base/macros.h"

namespace apollo {
namespace cyber {
namespace base {

// Study: The internal data structure is a linked list
// that behaves like a stack while living in one contiguous block of memory.
// A free-node list is maintained on top of the node array.
template <typename T>
class CCObjectPool : public std::enable_shared_from_this<CCObjectPool<T>> {
public:
explicit CCObjectPool(uint32_t size);
virtual ~CCObjectPool();

// Study: The arguments are forwarded to the constructor of T.
// An rvalue argument is passed as an rvalue reference,
// an lvalue as an lvalue reference (perfect forwarding).
template <typename... Args>
void ConstructAll(Args &&... args);

template <typename... Args>
std::shared_ptr<T> ConstructObject(Args &&... args);

std::shared_ptr<T> GetObject();
void ReleaseObject(T *);
uint32_t size() const;

private:
struct Node {
T object;
Node *next;
};

struct alignas(2 * sizeof(Node *)) Head {
uintptr_t count;
Node *node;
};

private:
CCObjectPool(CCObjectPool &) = delete;
CCObjectPool &operator=(CCObjectPool &) = delete;
bool FindFreeHead(Head *head);

std::atomic<Head> free_head_;
// Study: This is the place for storing the real object
Node *node_arena_ = nullptr;
uint32_t capacity_ = 0;
};

// Study: Initialize node_arena_
template <typename T>
CCObjectPool<T>::CCObjectPool(uint32_t size) : capacity_(size) {
node_arena_ = static_cast<Node *>(CheckedCalloc(capacity_, sizeof(Node)));
FOR_EACH(i, 0, capacity_ - 1) { node_arena_[i].next = node_arena_ + 1 + i; }
node_arena_[capacity_ - 1].next = nullptr;
free_head_.store({0, node_arena_}, std::memory_order_relaxed);
}

// Study: Construct an object with the same arguments in every slot of node_arena_
template <typename T>
template <typename... Args>
void CCObjectPool<T>::ConstructAll(Args &&... args) {
FOR_EACH(i, 0, capacity_) {
new (node_arena_ + i) T(std::forward<Args>(args)...);
}
}

// Study: This is a dangerous implementation.
// It hands out shared pointers to the objects,
// yet it does not destruct the objects when the pool is destructed,
// which leaks memory if type T owns heap-allocated members.
// Presumably it assumes the pool is only destructed after every object
// has been released, either manually or via its shared_ptr.
template <typename T>
CCObjectPool<T>::~CCObjectPool() {
std::free(node_arena_);
}

template <typename T>
bool CCObjectPool<T>::FindFreeHead(Head *head) {
Head new_head;
Head old_head = free_head_.load(std::memory_order_acquire);
do {
// Study: Already at the tail
if (unlikely(old_head.node == nullptr)) {
return false;
}
new_head.node = old_head.node->next;
new_head.count = old_head.count + 1;
} while (!free_head_.compare_exchange_weak(old_head, new_head,
std::memory_order_acq_rel,
std::memory_order_acquire));
// Study: Get the free head, and move the free head
*head = old_head;
return true;
}

// Study: Get one object; it is returned to the pool once no shared_ptr points to it
template <typename T>
std::shared_ptr<T> CCObjectPool<T>::GetObject() {
Head free_head;
if (unlikely(!FindFreeHead(&free_head))) {
return nullptr;
}
auto self = this->shared_from_this();
return std::shared_ptr<T>(reinterpret_cast<T *>(free_head.node),
[self](T *object) { self->ReleaseObject(object); });
}

// Study: Take the first node of the free list and placement-new a T into it.
template <typename T>
template <typename... Args>
std::shared_ptr<T> CCObjectPool<T>::ConstructObject(Args &&... args) {
Head free_head;
if (unlikely(!FindFreeHead(&free_head))) {
return nullptr;
}
auto self = this->shared_from_this();
T *ptr = new (free_head.node) T(std::forward<Args>(args)...);
return std::shared_ptr<T>(ptr, [self](T *object) {
object->~T();
self->ReleaseObject(object);
});
}

// Study: When an object is released, its node is linked back onto the free list.
template <typename T>
void CCObjectPool<T>::ReleaseObject(T *object) {
Head new_head;
Node *node = reinterpret_cast<Node *>(object);
Head old_head = free_head_.load(std::memory_order_acquire);
do {
node->next = old_head.node;
new_head.node = node;
new_head.count = old_head.count + 1;
} while (!free_head_.compare_exchange_weak(old_head, new_head,
std::memory_order_acq_rel,
std::memory_order_acquire));
}

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_CONCURRENT_OBJECT_POOL_H_
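A simplified illustration (not the Apollo class; `IntPoolSketch` and its members are made up) of the pool's central trick: GetObject hands out a shared_ptr whose custom deleter puts the slot back on a free list instead of freeing the memory.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Object pool sketch: the shared_ptr deleter recycles the slot.
class IntPoolSketch {
 public:
  explicit IntPoolSketch(std::size_t n) : slots_(n) {
    for (auto& s : slots_) free_list_.push_back(&s);
  }

  std::shared_ptr<int> Get() {
    if (free_list_.empty()) return nullptr;  // pool exhausted
    int* p = free_list_.back();
    free_list_.pop_back();
    // The deleter returns the slot to the free list; it never calls delete.
    return std::shared_ptr<int>(p,
                                [this](int* q) { free_list_.push_back(q); });
  }

  std::size_t FreeCount() const { return free_list_.size(); }

 private:
  std::vector<int> slots_;
  std::vector<int*> free_list_;
};
```

Note the deleter captures `this`, so the pool must outlive every handed-out pointer; this is exactly why CCObjectPool captures `shared_from_this()` instead.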

cyber/base/object_pool

The regular version of concurrent_object_pool, without any concurrency guarantees.
Object construction and destruction inside the pool happen only in the pool's constructor and destructor.
This is quite different from concurrent_object_pool,
which exposes public creation and release of individual objects
and does not call the object destructors in its own destructor.

Clearly the two classes rest on different usage assumptions and cannot easily replace each other.

Apollo Cyber Study - Cyber Launch Python Script

Cyber's so-called DAG arguably exists largely for the sake of the perception module.

Since the Python code is simple, I will not comment on its source line by line.
Instead, just the overall concept is described.

cyber_launch is the launcher of the Apollo modules.
cyber_launch is actually just a Python script, the same as cyber_launch.py.

Apollo first compiles each main module as a shared library,
then wraps the path to the shared object and its required parameters into an XML file.

cyber_launch then parses the XML file
and starts/stops the corresponding modules.

How it handles ‘Start’

Parse the launch file. Then launch via mainboard if the entry is a library, or execute it directly if it is a binary.

How it handles ‘Stop’

It is a simple pkill, using the launch file name as the regex
(it is actually killing the cyber_launch script that has the target launch file as its argument).

If no launch file is provided, it tries to kill cyber_launch directly.
Since cyber_launch has registered an atexit callback, it also sends stop to all its
child threads. Moreover, since the modules are launched as daemon threads of the cyber_launch script,
we can be sure the corresponding modules shut down when the cyber_launch script is killed.

Apollo Cyber Study (cyber/base 1)

I rather doubt whether the Baidu engineers really thought the memory orders through when they wrote them.

// Study: what follows are my study notes

cyber/base/macros.h

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_MACROS_H_
#define CYBER_BASE_MACROS_H_

#include <cstdlib>
#include <new>

// Study: __builtin_expect tells the compiler how likely the condition is to be true,
// so it can optimize at the assembly level (e.g. branch layout) using this information
#if __GNUC__ >= 3
#define likely(x) (__builtin_expect((x), 1))
#define unlikely(x) (__builtin_expect((x), 0))
#else
#define likely(x) (x)
#define unlikely(x) (x)
#endif

// Study: The size of a cache line, the block that is fetched from memory into the cache
#define CACHELINE_SIZE 64

// Study: Metaprogramming via SFINAE. The trait is true when type T has a `func` member,
// and false otherwise (since sizeof(int) is not 1).
// Basically used to detect the existence of func on T at compile time.
// Can be mixed with the STL type traits.
#define DEFINE_TYPE_TRAIT(name, func) \
template <typename T> \
class name { \
private: \
template <typename Class> \
static char Test(decltype(&Class::func)*); \
template <typename> \
static int Test(...); \
\
public: \
static constexpr bool value = sizeof(Test<T>(nullptr)) == 1; \
}; \
\
template <typename T> \
constexpr bool name<T>::value;

// Study: Tell the processor to pause (effectively a no-op).
// Unlike a plain nop, the rep; nop encoding (the PAUSE instruction) lets the processor optimize the spin-wait
inline void cpu_relax() { asm volatile("rep; nop" ::: "memory"); }

// Study: Allocate memory, throwing std::bad_alloc on failure
inline void* CheckedMalloc(size_t size) {
void* ptr = std::malloc(size);
if (!ptr) {
throw std::bad_alloc();
}
return ptr;
}

// Study: Allocate zero-initialized memory, throwing std::bad_alloc on failure
inline void* CheckedCalloc(size_t num, size_t size) {
void* ptr = std::calloc(num, size);
if (!ptr) {
throw std::bad_alloc();
}
return ptr;
}

#endif // CYBER_BASE_MACROS_H_
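A usage sketch of DEFINE_TYPE_TRAIT: the macro is reproduced here so the example is self-contained, then used to detect a `Run()` member (the `HasRun`/`Worker`/`Idler` names are made up for illustration; compile as C++17 so the constexpr member is implicitly inline).

```cpp
// Reproduction of the DEFINE_TYPE_TRAIT macro from macros.h.
#define DEFINE_TYPE_TRAIT(name, func)                            \
  template <typename T>                                          \
  class name {                                                   \
   private:                                                      \
    template <typename Class>                                    \
    static char Test(decltype(&Class::func)*);                   \
    template <typename>                                          \
    static int Test(...);                                        \
                                                                 \
   public:                                                       \
    static constexpr bool value = sizeof(Test<T>(nullptr)) == 1; \
  };

DEFINE_TYPE_TRAIT(HasRun, Run)  // true iff T declares a member named Run

struct Worker {
  void Run() {}
};
struct Idler {};

static_assert(HasRun<Worker>::value, "Worker has Run()");
static_assert(!HasRun<Idler>::value, "Idler has no Run()");
```

When `&Class::func` is well-formed, the char overload wins and sizeof is 1; otherwise SFINAE discards it and the int overload makes the trait false.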

cyber/base/atomic_fifo.h

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_ATOMIC_FIFO_H_
#define CYBER_BASE_ATOMIC_FIFO_H_

#include <atomic>
#include <cstdlib>
#include <cstring>
#include <iostream>

#include "cyber/base/macros.h"

namespace apollo {
namespace cyber {

template <typename T>
class AtomicFIFO {
private:
struct Node {
T value;
};

public:
// Study: Singleton
static AtomicFIFO *GetInstance(int cap = 100) {
static AtomicFIFO *inst = new AtomicFIFO(cap);
return inst;
}

bool Push(const T &value);
bool Pop(T *value);
// insert();

private:
Node *node_arena_;
// Study: Cache-line alignment keeps each atomic on its own line, avoiding false sharing between cores
alignas(CACHELINE_SIZE) std::atomic<uint32_t> head_;
alignas(CACHELINE_SIZE) std::atomic<uint32_t> commit_;
alignas(CACHELINE_SIZE) std::atomic<uint32_t> tail_;
int capacity_;

// Study: Only allow Singleton
explicit AtomicFIFO(int cap);
~AtomicFIFO();
AtomicFIFO(AtomicFIFO &) = delete;
AtomicFIFO &operator=(AtomicFIFO &) = delete;
};

template <typename T>
AtomicFIFO<T>::AtomicFIFO(int cap) : capacity_(cap) {
node_arena_ = static_cast<Node *>(malloc(capacity_ * sizeof(Node)));
memset(node_arena_, 0, capacity_ * sizeof(Node));

// Study: Set value to 0
head_.store(0, std::memory_order_relaxed);
tail_.store(0, std::memory_order_relaxed);
commit_.store(0, std::memory_order_relaxed);
}

template <typename T>
AtomicFIFO<T>::~AtomicFIFO() {
if (node_arena_ != nullptr) {
for (int i = 0; i < capacity_; i++) {
// Study: Call the T destructor manually; objects placed in the malloc'd region are not destructed automatically
node_arena_[i].value.~T();
}
free(node_arena_);
}
}

template <typename T>
bool AtomicFIFO<T>::Push(const T &value) {
uint32_t oldt, newt;

// Study: Retry the push until it succeeds; return false if the queue is full
oldt = tail_.load(std::memory_order_acquire);
do {
uint32_t h = head_.load(std::memory_order_acquire);
uint32_t t = tail_.load(std::memory_order_acquire);

if (((t + 1) % capacity_) == h) return false;

newt = (oldt + 1) % capacity_;
// Study: On success, tail_ becomes newt; on failure, oldt is reloaded with the current tail_,
// keeping the tail value in sync
} while (!tail_.compare_exchange_weak(oldt, newt, std::memory_order_acq_rel,
std::memory_order_acquire));

(node_arena_ + oldt)->value = value;

// Study: commit_ basically mirrors tail_, but it is the index Pop() reads.
// It lets pop operations avoid blocking the core of the push path.
while (unlikely(commit_.load(std::memory_order_acquire) != oldt)) cpu_relax();

// Study: After the commit, this value becomes visible to Pop()
commit_.store(newt, std::memory_order_release);

return true;
}

template <typename T>
bool AtomicFIFO<T>::Pop(T *value) {
uint32_t oldh, newh;

oldh = head_.load(std::memory_order_acquire);

// Study: Basically the same logic as the push path: retry until success, return false if empty
do {
uint32_t h = head_.load(std::memory_order_acquire);
uint32_t c = commit_.load(std::memory_order_acquire);

if (h == c) return false;

newh = (oldh + 1) % capacity_;

*value = (node_arena_ + oldh)->value;
} while (!head_.compare_exchange_weak(oldh, newh, std::memory_order_acq_rel,
std::memory_order_acquire));

return true;
}

} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_ATOMIC_FIFO_H_
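One detail worth calling out from the Push/Pop checks above: with capacity N, "full" is detected as `(tail + 1) % N == head`, so the buffer can hold at most N - 1 elements, and "empty" is `head == commit_`. A sketch on plain indices (`PushableCount` is a made-up helper):

```cpp
// Count how many pushes fit before AtomicFIFO's full-check fires.
int PushableCount(int capacity) {
  int head = 0, tail = 0, pushed = 0;
  while ((tail + 1) % capacity != head) {  // same full-check as Push()
    tail = (tail + 1) % capacity;
    ++pushed;
  }
  return pushed;  // always capacity - 1
}
```

Sacrificing one slot is the classic trick that lets a circular buffer distinguish full from empty without a separate counter.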

cyber/base/for_each.h

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_FOR_EACH_H_
#define CYBER_BASE_FOR_EACH_H_

#include <type_traits>

#include "cyber/base/macros.h"

namespace apollo {
namespace cyber {
namespace base {

// Study: A trait checking whether the class implements operator<
DEFINE_TYPE_TRAIT(HasLess, operator<) // NOLINT

// Study: If both of them have impl <, use it
template <class Value, class End>
typename std::enable_if<HasLess<Value>::value && HasLess<End>::value,
bool>::type
LessThan(const Value& val, const End& end) {
return val < end;
}

// Study: Otherwise, fall back to an inequality check.
// Frankly, the function name is misleading; it only makes sense when used inside FOR_EACH
template <class Value, class End>
typename std::enable_if<!HasLess<Value>::value || !HasLess<End>::value,
bool>::type
LessThan(const Value& val, const End& end) {
return val != end;
}

// Study: Loop until end. i can be an integer index or an iterator; for plain ints the comparison falls back to !=, since int has no member operator<
#define FOR_EACH(i, begin, end) \
for (auto i = (true ? (begin) : (end)); \
apollo::cyber::base::LessThan(i, (end)); ++i)

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_FOR_EACH_H_
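FOR_EACH works with plain integer bounds as well as iterators: `int` has no member `operator<`, so `HasLess<int>::value` is false and LessThan falls back to `!=`. A minimal re-creation under that fallback (`FOR_EACH_SKETCH` and `SumBelow` are made up for illustration):

```cpp
// Simplified FOR_EACH assuming the != fallback path.
#define FOR_EACH_SKETCH(i, begin, end) \
  for (auto i = (true ? (begin) : (end)); i != (end); ++i)

// The `true ? (begin) : (end)` trick forces begin and end to a common
// type before auto deduction, so mixed integer types deduce consistently.
int SumBelow(int end) {
  int sum = 0;
  FOR_EACH_SKETCH(i, 0, end) { sum += i; }
  return sum;
}
```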

cyber/base/wait_strategy

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_WAIT_STRATEGY_H_
#define CYBER_BASE_WAIT_STRATEGY_H_

#include <chrono>
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <thread>

namespace apollo {
namespace cyber {
namespace base {

class WaitStrategy {
public:
// Study: Allow one waiter to proceed
virtual void NotifyOne() {}
// Study: Allow all waiters to proceed
virtual void BreakAllWait() {}
// Study: Wait here
virtual bool EmptyWait() = 0;
virtual ~WaitStrategy() {}
};

// Study: Block until notified
class BlockWaitStrategy : public WaitStrategy {
public:
BlockWaitStrategy() {}
void NotifyOne() override { cv_.notify_one(); }

bool EmptyWait() override {
std::unique_lock<std::mutex> lock(mutex_);
cv_.wait(lock);
return true;
}

void BreakAllWait() { cv_.notify_all(); }

private:
std::mutex mutex_;
std::condition_variable cv_;
};


// Study: Sleep, then proceed once the sleep time has elapsed
class SleepWaitStrategy : public WaitStrategy {
public:
SleepWaitStrategy() {}
explicit SleepWaitStrategy(uint64_t sleep_time_us)
: sleep_time_us_(sleep_time_us) {}

bool EmptyWait() override {
std::this_thread::sleep_for(std::chrono::microseconds(sleep_time_us_));
return true;
}

void SetSleepTimeMicroSecends(uint64_t sleep_time_us) {
sleep_time_us_ = sleep_time_us;
}

private:
uint64_t sleep_time_us_ = 10000;
};

// Study: Reschedule this thread, letting other threads run first
class YieldWaitStrategy : public WaitStrategy {
public:
YieldWaitStrategy() {}
bool EmptyWait() override {
std::this_thread::yield();
return true;
}
};

// Study: Return immediately; the caller just busy-spins
class BusySpinWaitStrategy : public WaitStrategy {
public:
BusySpinWaitStrategy() {}
bool EmptyWait() override { return true; }
};

// Study: Like BlockWaitStrategy, but with a time limit; returns false on timeout
class TimeoutBlockWaitStrategy : public WaitStrategy {
public:
TimeoutBlockWaitStrategy() {}
explicit TimeoutBlockWaitStrategy(uint64_t timeout)
: time_out_(std::chrono::milliseconds(timeout)) {}

void NotifyOne() override { cv_.notify_one(); }

bool EmptyWait() override {
std::unique_lock<std::mutex> lock(mutex_);
if (cv_.wait_for(lock, time_out_) == std::cv_status::timeout) {
return false;
}
return true;
}

void BreakAllWait() { cv_.notify_all(); }

void SetTimeout(uint64_t timeout) {
time_out_ = std::chrono::milliseconds(timeout);
}

private:
std::mutex mutex_;
std::condition_variable cv_;
std::chrono::milliseconds time_out_;
};

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_WAIT_STRATEGY_H_
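A sketch of how a consumer uses EmptyWait(): keep waiting while there is no data, and give up as soon as the strategy returns false. Everything here is illustrative; `GiveUpAfterN` is a made-up strategy standing in for TimeoutBlockWaitStrategy's timeout behavior.

```cpp
// Minimal strategy hierarchy mirroring the WaitStrategy interface.
struct WaitStrategySketch {
  virtual bool EmptyWait() = 0;
  virtual ~WaitStrategySketch() {}
};

struct BusySpinSketch : WaitStrategySketch {
  bool EmptyWait() override { return true; }  // never gives up
};

struct GiveUpAfterN : WaitStrategySketch {
  explicit GiveUpAfterN(int n) : left_(n) {}
  bool EmptyWait() override { return left_-- > 0; }
  int left_;
};

// Returns true if data arrives before the strategy gives up.
bool ConsumeWhenReady(int data_ready_after, WaitStrategySketch* ws) {
  for (int waits = 0; waits < data_ready_after; ++waits) {
    if (!ws->EmptyWait()) return false;  // strategy aborted the wait
  }
  return true;  // data finally available
}
```

Swapping the strategy object changes only what "waiting" costs (blocking, sleeping, yielding, spinning), not the consumer's logic.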

cyber/base/atomic_hash_map.h

/******************************************************************************
* Copyright 2018 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/

#ifndef CYBER_BASE_ATOMIC_HASH_MAP_H_
#define CYBER_BASE_ATOMIC_HASH_MAP_H_

#include <stdint.h>
#include <atomic>
#include <type_traits>
#include <utility>

namespace apollo {
namespace cyber {
namespace base {

// Study: TableSize must be power of 2
/**
* @brief A implementation of lock-free fixed size hash map
*
* @tparam K Type of key, must be integral
* @tparam V Type of value
* @tparam 128 Size of hash table
* @tparam 0 Type traits, use for checking types of key & value
*/
template <typename K, typename V, std::size_t TableSize = 128,
typename std::enable_if<std::is_integral<K>::value &&
(TableSize & (TableSize - 1)) == 0,
int>::type = 0>
class AtomicHashMap {
public:
AtomicHashMap() : capacity_(TableSize), mode_num_(capacity_ - 1) {}
AtomicHashMap(const AtomicHashMap &other) = delete;
AtomicHashMap &operator=(const AtomicHashMap &other) = delete;

// Study: Just hashes the key and behaves like a regular hash map;
// the atomic part lives in Bucket and Entry.
// It does NOT apply a hash function to the key,
// so the user must be careful with key selection
bool Has(K key) {
uint64_t index = key & mode_num_;
return table_[index].Has(key);
}

bool Get(K key, V **value) {
uint64_t index = key & mode_num_;
return table_[index].Get(key, value);
}

bool Get(K key, V *value) {
uint64_t index = key & mode_num_;
V *val = nullptr;
bool res = table_[index].Get(key, &val);
if (res) {
*value = *val;
}
return res;
}

void Set(K key) {
uint64_t index = key & mode_num_;
table_[index].Insert(key);
}

void Set(K key, const V &value) {
uint64_t index = key & mode_num_;
table_[index].Insert(key, value);
}

// Study: Support rvalue passing
void Set(K key, V &&value) {
uint64_t index = key & mode_num_;
table_[index].Insert(key, std::forward<V>(value));
}

private:
// Study: The node type inside a Bucket; essentially a regular list node,
// except value and next are atomic
struct Entry {
Entry() {}
explicit Entry(K key) : key(key) {
value_ptr.store(new V(), std::memory_order_release);
}
Entry(K key, const V &value) : key(key) {
value_ptr.store(new V(value), std::memory_order_release);
}
Entry(K key, V &&value) : key(key) {
value_ptr.store(new V(std::forward<V>(value)), std::memory_order_release);
}
~Entry() { delete value_ptr.load(std::memory_order_acquire); }

K key = 0;
std::atomic<V *> value_ptr = {nullptr};
std::atomic<Entry *> next = {nullptr};
};

// Study: The storage for all entries whose keys hash to this slot.
// Internally it is a sorted linked list.
// Moreover, the atomicity is guaranteed here and in Entry
class Bucket {
public:
Bucket() : head_(new Entry()) {}
~Bucket() {
Entry *ite = head_;
while (ite) {
auto tmp = ite->next.load(std::memory_order_acquire);
delete ite;
ite = tmp;
}
}

// Study: Straightforward sorted-list lookup
bool Has(K key) {
Entry *m_target = head_->next.load(std::memory_order_acquire);
while (Entry *target = m_target) {
if (target->key < key) {
m_target = target->next.load(std::memory_order_acquire);
continue;
} else {
return target->key == key;
}
}
return false;
}

// Study: Walk the list; return the entry matching the key and its predecessor
bool Find(K key, Entry **prev_ptr, Entry **target_ptr) {
Entry *prev = head_;
Entry *m_target = head_->next.load(std::memory_order_acquire);
while (Entry *target = m_target) {
if (target->key == key) {
*prev_ptr = prev;
*target_ptr = target;
return true;
} else if (target->key > key) {
*prev_ptr = prev;
*target_ptr = target;
return false;
} else {
prev = target;
m_target = target->next.load(std::memory_order_acquire);
}
}
*prev_ptr = prev;
*target_ptr = nullptr;
return false;
}

// Study: Keep insert until success
void Insert(K key, const V &value) {
Entry *prev = nullptr;
Entry *target = nullptr;
Entry *new_entry = new Entry(key, value);
V *new_value = new V(value);
while (true) {
if (Find(key, &prev, &target)) {
// key exists, update value
auto old_val_ptr = target->value_ptr.load(std::memory_order_acquire);
if (target->value_ptr.compare_exchange_strong(
old_val_ptr, new_value, std::memory_order_acq_rel,
std::memory_order_relaxed)) {
delete new_entry;
return;
}
continue;
} else {
new_entry->next.store(target, std::memory_order_release);
if (prev->next.compare_exchange_strong(target, new_entry,
std::memory_order_acq_rel,
std::memory_order_relaxed)) {
// Insert success
delete new_value;
return;
}
// another entry has been inserted, retry
}
}
}

// Study: Same as above; the remaining code is equally simple, so no further commentary
void Insert(K key, V &&value) {
Entry *prev = nullptr;
Entry *target = nullptr;
Entry *new_entry = new Entry(key, value);
auto new_value = new V(std::forward<V>(value));
while (true) {
if (Find(key, &prev, &target)) {
// key exists, update value
auto old_val_ptr = target->value_ptr.load(std::memory_order_acquire);
if (target->value_ptr.compare_exchange_strong(
old_val_ptr, new_value, std::memory_order_acq_rel,
std::memory_order_relaxed)) {
delete new_entry;
return;
}
continue;
} else {
new_entry->next.store(target, std::memory_order_release);
if (prev->next.compare_exchange_strong(target, new_entry,
std::memory_order_acq_rel,
std::memory_order_relaxed)) {
// Insert success
delete new_value;
return;
}
// another entry has been inserted, retry
}
}
}

void Insert(K key) {
Entry *prev = nullptr;
Entry *target = nullptr;
Entry *new_entry = new Entry(key);
auto new_value = new V();
while (true) {
if (Find(key, &prev, &target)) {
// key exists, update value
auto old_val_ptr = target->value_ptr.load(std::memory_order_acquire);
if (target->value_ptr.compare_exchange_strong(
old_val_ptr, new_value, std::memory_order_acq_rel,
std::memory_order_relaxed)) {
delete new_entry;
return;
}
continue;
} else {
new_entry->next.store(target, std::memory_order_release);
if (prev->next.compare_exchange_strong(target, new_entry,
std::memory_order_acq_rel,
std::memory_order_relaxed)) {
// Insert success
delete new_value;
return;
}
// another entry has been inserted, retry
}
}
}

bool Get(K key, V **value) {
Entry *prev = nullptr;
Entry *target = nullptr;
if (Find(key, &prev, &target)) {
*value = target->value_ptr.load(std::memory_order_acquire);
return true;
}
return false;
}

Entry *head_;
};

private:
Bucket table_[TableSize];
uint64_t capacity_;
uint64_t mode_num_;
};

} // namespace base
} // namespace cyber
} // namespace apollo

#endif // CYBER_BASE_ATOMIC_HASH_MAP_H_
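This also explains the power-of-two constraint on TableSize: when the table size is a power of two, `key & (TableSize - 1)` equals `key % TableSize`, so the bucket index is computed with a single AND (`BucketIndex` is a made-up helper for illustration):

```cpp
#include <cstdint>

// Bucket-index computation used throughout AtomicHashMap.
uint64_t BucketIndex(uint64_t key, uint64_t table_size /* power of two */) {
  return key & (table_size - 1);  // == key % table_size
}
```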

Review Job Skill

Before the Lunar New Year, I want to jot down briefly what I have learned at work.
I have been busy with work the whole time and had no time for practice problems. Shame on me...

Work summary

I am an engineer on the self-driving team at Jingchi.ai, mainly responsible for map building and related matters, plus plenty of odds and ends.
I also do some localization work. It leans toward the engineering side.

### Areas touched

1. Robotics

  • What modules a robot has and how they exchange data
  • How a robot actually moves
  • Basic concepts and background knowledge
  • How to synchronize and use data from different sensors
  • calibration

2. SLAM

  • The knowledge and mathematical foundations needed for map building
  • Common techniques
  • kalman filter, pose graph optimization
  • mapping algorithms

3. Computer Vision

  • Hard to know where to start

4. Compilation

  • How to compile things and solve all the problems that come with it

5. Tooling and optimization

  • C++17, various memory optimizations, multi-threading, and speed-ups from using the right algorithms

6. Frontend/backend and deployment

  • Simple things; when nobody else does them, I do

7. Machine learning

  • Mostly reading other people's code and listening to their talks... I barely use it myself

8. Engineering practice

Reflections

  • I am not from a computer vision background, and I only know machine learning superficially
  • I bought some related books and am slowly working through them
  • I still do not understand how people find the relevant technical papers
  • I understand the concepts, but the mathematical details...
  • My fundamentals need to go deeper, and I need a better grasp of where this field is heading
Taiwan Experience

I have just finished half a year of exchange at NTU; here is a record of my experiences and impressions.

  • I lost quite a lot of weight, even though every meal was eaten out. Probably because I gave up snacks outside of meals.
  • Taiwanese people have a sense of working for their country, which is very different from Hongkongers. In talks and lectures they often mention the obstacles Taiwan faces and how their work pushes Taiwan forward.
  • Taiwanese people do not know Hong Kong well; some think Hong Kong cannot access Facebook or YouTube...
  • Students are more self-motivated and coursework is very hands-on, so there is no disconnect from industry; professors also bring in outside companies from time to time, giving students many opportunities. Though I suspect this is specific to the university.
  • Besides studying, NTU people really know how to have fun.
  • The plants in Taiwan are especially green. Whether it is the climate or something else, I do not know.
  • The roads are wide (a few counties and cities excepted) and there are fewer people (compared with Hong Kong). Scooters are everywhere. A green light does not mean there is no traffic.
  • Great for skateboarding; I am glad I learned it.
    I love Taipei's riverside: charming flowers in the morning, gorgeous views at night. Perfect for walking and thinking.
  • Bubble tea shops, snack stalls, and cafes are everywhere.
  • Many technical talks and meetups. I wish Hong Kong had them too.
  • The pace is slower than Hong Kong's; or rather, Hong Kong is just too fast.
  • Many bookstores; it is easy to find the book you want.
  • There is more freedom than in Hong Kong.
  • No herbal tea shops /_\
  • The gym culture is a bit stronger than Hong Kong's; not that different, really.
  • Beautiful scenery, but inconvenient transport.
  • No interest in night markets or old streets.
  • Many temples.
  • Many stray cats and dogs.
  • Few tall buildings.
  • It rains a lot.
  • Warm and welcoming people.

Microsoft ImagineHack 2017

fun

I had always wanted to know what a hackathon was like, so I joined this one; winning was not the point. It lasted two days, with an all-nighter in the middle.

The theme was quite open-ended: any project related to social innovation, fintech, or health qualified.

Also, entries had to use Microsoft Azure services.

A brief overview

Almost every deliverable was a mobile app; only a few teams built chat bots.

Almost no team did anything technically remarkable, but given how little time there was, doing something simple with a good hook is perfectly normal.

On my team I was the only one actually coding, and I rarely build mobile apps (I had assumed I could find someone else to do the UI while I did the logic), so we built a web app instead.

The topic was book lending, book donation, and so on. Flask for the web pages, Docker for deployment.

https://github.com/Bookeverflow/Bookeverflow

Honestly, we barely used Azure, the UI was pretty poor (without a dedicated designer, every UI I make is ugly...), and the features were not distinctive enough.

Most of the time went into the UI and the database, because there were rather a lot of pages and models: it leaned toward being a platform, with no standout feature.

The other teams, by contrast, focused on their core feature; their final products had only one or two pages, and the results looked good.

Reflections

  • Mobile apps still have the edge at hackathons: the phone's UI already looks decent, and live demos are easy and attention-grabbing
  • The technology matters less; the theme and the presentation matter more
  • Focus on one feature; do not build platform-style products
  • Best to team up with people you know, including a designer; business people are actually less important
  • Make absolutely sure the judges can clearly see the finished product

Job Interview - baidu && yahoo japan

The time to look for a job has finally come. The first places I applied to were Baidu (Beijing) and Yahoo Japan; neither company really looks at transcripts. Below is a record of the interview process.

Baidu (Beijing)

  • Referred by a professor; I heard his friend holds a very senior position there, which probably earned me some points
  • The department was user behavior study
  • Three interviews in total; since I was not in Beijing, all three were phone interviews
  • The first covered basic knowledge; not hard
  • The second went deeper, with algorithm problems and machine learning questions, touching on svm and covariance
  • The third had harder algorithm problems and asked me to analyze a given scenario
  • Finally I went to Beijing to get a feel for their work
  • Result: got the offer, but I wanted a more engineering-oriented position, so I declined

Yahoo Japan

  • Applied online
  • You must first pass an online programming test. I solved basically everything except the last question.
  • After passing, they notify you of the interview by email
  • There are 2 interviews in total, each facing different people
  • The first interview was with 3 Japanese interviewers
    • The content revolved around the skills on my CV and my past projects
    • Never list projects you are not familiar with; failing to answer questions about them is even worse
    • They also asked quite a few non-technical questions, such as what your weaknesses are and how your friends would describe you
    • Mainly one interviewer asked the questions while the others talked among themselves in Japanese; I had no idea what they were saying
  • The second interview was much like the first, just with another three Japanese interviewers
  • Presumably different departments interview in separate groups
  • I did quite badly on the non-technical questions, and on the technical ones there were many details I could not articulate
  • Result: no reply at all

So the job hunt continues, and I am wondering which companies to try next.