Reposted from: http://blog.csdn.net/minCrazy/article/details/40791795
Counting across threads, sharing state, or accumulating statistics all involve variables that several threads read and modify, so access to those variables must be made mutually exclusive.

When a multithreaded mutual-exclusion problem comes up, the first tool that comes to mind is a lock: a mutex makes the threads exclude each other. Recently, however, while reading some open-source projects, I came across the `__sync_fetch_and_add` family of atomic builtins used for synchronized reads and updates. I looked them up online, found a blog post introducing the family, studied it, and wrote up these notes.
First, an operation like `count++` in a C/C++ program is not atomic. An increment actually breaks down into three steps:
- load the value from memory (cache) into a register
- add 1 in the register
- store the result back to memory (cache)
```c
#define _GNU_SOURCE              /* for cpu_set_t, CPU_ZERO/CPU_SET, sched_setaffinity */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <sys/syscall.h>
#include <linux/types.h>         /* __u32, __u64 */
#include <time.h>
#include <sys/time.h>

#define INC_TO 1000000           /* one million increments per thread */

/* read the CPU timestamp counter */
static __u64 rdtsc(void)
{
    __u32 lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return (__u64)hi << 32 | lo;
}

int global_int = 0;
pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;  /* statically initialized mutex */

/* newer glibc declares its own gettid(), so use a distinct name */
static pid_t my_gettid(void)
{
    return syscall(SYS_gettid);
}

/* variant 1: atomic builtin */
static void *thread_routine1(void *arg)
{
    int i;
    int proc_num = (int)(long)arg;
    __u64 begin, end;
    struct timeval tv_begin, tv_end;
    __u64 time_interval;
    cpu_set_t set;

    /* pin this thread to CPU proc_num */
    CPU_ZERO(&set);
    CPU_SET(proc_num, &set);
    if (sched_setaffinity(my_gettid(), sizeof(cpu_set_t), &set)) {
        fprintf(stderr, "failed to set affinity\n");
        return NULL;
    }

    begin = rdtsc();
    gettimeofday(&tv_begin, NULL);
    for (i = 0; i < INC_TO; i++)
        __sync_fetch_and_add(&global_int, 1);   /* atomic increment */
    gettimeofday(&tv_end, NULL);
    end = rdtsc();

    time_interval = (tv_end.tv_sec - tv_begin.tv_sec) * 1000000
                  + (tv_end.tv_usec - tv_begin.tv_usec);
    fprintf(stderr, "proc_num : %d, __sync_fetch_and_add cost %llu CPU cycle, cost %llu us\n",
            proc_num, end - begin, time_interval);
    return NULL;
}

/* variant 2: mutex around the increment */
static void *thread_routine2(void *arg)
{
    int i;
    int proc_num = (int)(long)arg;
    __u64 begin, end;
    struct timeval tv_begin, tv_end;
    __u64 time_interval;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(proc_num, &set);
    if (sched_setaffinity(my_gettid(), sizeof(cpu_set_t), &set)) {
        fprintf(stderr, "failed to set affinity\n");
        return NULL;
    }

    begin = rdtsc();
    gettimeofday(&tv_begin, NULL);
    for (i = 0; i < INC_TO; i++) {
        pthread_mutex_lock(&count_lock);
        global_int++;
        pthread_mutex_unlock(&count_lock);
    }
    gettimeofday(&tv_end, NULL);
    end = rdtsc();

    time_interval = (tv_end.tv_sec - tv_begin.tv_sec) * 1000000
                  + (tv_end.tv_usec - tv_begin.tv_usec);
    fprintf(stderr, "proc_num : %d, pthread_mutex_lock cost %llu CPU cycle, cost %llu us\n",
            proc_num, end - begin, time_interval);
    return NULL;
}

/* variant 3: no synchronization at all (racy on purpose) */
static void *thread_routine3(void *arg)
{
    int i;
    int proc_num = (int)(long)arg;
    __u64 begin, end;
    struct timeval tv_begin, tv_end;
    __u64 time_interval;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(proc_num, &set);
    if (sched_setaffinity(my_gettid(), sizeof(cpu_set_t), &set)) {
        fprintf(stderr, "failed to set affinity\n");
        return NULL;
    }

    begin = rdtsc();
    gettimeofday(&tv_begin, NULL);
    for (i = 0; i < INC_TO; i++)
        global_int++;                            /* unsynchronized */
    gettimeofday(&tv_end, NULL);
    end = rdtsc();

    time_interval = (tv_end.tv_sec - tv_begin.tv_sec) * 1000000
                  + (tv_end.tv_usec - tv_begin.tv_usec);
    fprintf(stderr, "proc_num : %d, no lock cost %llu CPU cycle, cost %llu us\n",
            proc_num, end - begin, time_interval);
    return NULL;
}

int main(void)
{
    int procs = 0;
    int all_cores = 0;
    int i;
    pthread_t *thrs;

    procs = (int)sysconf(_SC_NPROCESSORS_ONLN);      /* CPUs currently online */
    if (procs < 0) {
        fprintf(stderr, "failed to fetch available CPUs(Cores)\n");
        return -1;
    }
    all_cores = (int)sysconf(_SC_NPROCESSORS_CONF);  /* CPUs configured */
    if (all_cores < 0) {
        fprintf(stderr, "failed to fetch system configure CPUs(Cores)\n");
        return -1;
    }
    printf("system configure CPUs(Cores): %d\n", all_cores);
    printf("system available CPUs(Cores): %d\n", procs);

    thrs = malloc(sizeof(pthread_t) * procs);
    if (thrs == NULL) {
        fprintf(stderr, "failed to malloc pthread array\n");
        return -1;
    }

    printf("starting %d threads...\n", procs);
    /* swap in thread_routine2 (mutex) or thread_routine3 (no lock)
       to produce the other two test runs below */
    for (i = 0; i < procs; i++) {
        if (pthread_create(&thrs[i], NULL, thread_routine1, (void *)(long)i)) {
            fprintf(stderr, "failed to pthread create\n");
            procs = i;
            break;
        }
    }
    for (i = 0; i < procs; i++)
        pthread_join(thrs[i], NULL);

    printf("after doing all the math, global_int value is: %d\n", global_int);
    printf("expected value is: %d\n", INC_TO * procs);

    free(thrs);
    return 0;
}
```
```
system configure CPUs(Cores): 8
system available CPUs(Cores): 8
starting 8 threads...
proc_num : 5, no lock cost 158839371 CPU cycle, cost 66253 us
proc_num : 6, no lock cost 163866879 CPU cycle, cost 68351 us
proc_num : 2, no lock cost 173866203 CPU cycle, cost 72521 us
proc_num : 7, no lock cost 181006344 CPU cycle, cost 75500 us
proc_num : 1, no lock cost 186387174 CPU cycle, cost 77728 us
proc_num : 0, no lock cost 186698304 CPU cycle, cost 77874 us
proc_num : 3, no lock cost 196089462 CPU cycle, cost 81790 us
proc_num : 4, no lock cost 200366793 CPU cycle, cost 83576 us
after doing all the math, global_int value is: 1743884
expected value is: 8000000

system configure CPUs(Cores): 8
system available CPUs(Cores): 8
starting 8 threads...
proc_num : 1, pthread_mutex_lock cost 9752929875 CPU cycle, cost 4068121 us
proc_num : 5, pthread_mutex_lock cost 10038570354 CPU cycle, cost 4187272 us
proc_num : 7, pthread_mutex_lock cost 10041209091 CPU cycle, cost 4188374 us
proc_num : 0, pthread_mutex_lock cost 10044102546 CPU cycle, cost 4189546 us
proc_num : 6, pthread_mutex_lock cost 10113533973 CPU cycle, cost 4218541 us
proc_num : 4, pthread_mutex_lock cost 10117540197 CPU cycle, cost 4220212 us
proc_num : 3, pthread_mutex_lock cost 10160384391 CPU cycle, cost 4238083 us
proc_num : 2, pthread_mutex_lock cost 10164464784 CPU cycle, cost 4239778 us
after doing all the math, global_int value is: 8000000
expected value is: 8000000

system configure CPUs(Cores): 8
system available CPUs(Cores): 8
starting 8 threads...
proc_num : 3, __sync_fetch_and_add cost 2364148575 CPU cycle, cost 986129 us
proc_num : 1, __sync_fetch_and_add cost 2374990974 CPU cycle, cost 990652 us
proc_num : 2, __sync_fetch_and_add cost 2457930267 CPU cycle, cost 1025247 us
proc_num : 5, __sync_fetch_and_add cost 2463027030 CPU cycle, cost 1027373 us
proc_num : 7, __sync_fetch_and_add cost 2532240981 CPU cycle, cost 1056244 us
proc_num : 4, __sync_fetch_and_add cost 2555055054 CPU cycle, cost 1065760 us
proc_num : 0, __sync_fetch_and_add cost 2561248971 CPU cycle, cost 1068331 us
proc_num : 6, __sync_fetch_and_add cost 2558781396 CPU cycle, cost 1067314 us
after doing all the math, global_int value is: 8000000
expected value is: 8000000
```
The results show:

1. The expected value is 8000000, but without locking the actual result is 1743884. Updating a global counter from multiple threads without synchronization is simply wrong.
2. With synchronization, both the mutex and the atomic operation produce the correct result.
3. On performance, `__sync_fetch_and_add()` crushes the mutex: by these numbers it is roughly 4-5x faster.
Type | Average CPU cycles | Average time (us) |
---|---|---|
No lock | 180890066 | 75449.13 |
Mutex | 10054091901 | 4193740.875 |
Atomic | 2483427906 | 1035881.25 |
Note: the numbers above put `__sync_fetch_and_add()` at roughly 4-5x the speed of the mutex, not the 6-7x reported in reference [1]. I suspect the gap comes from different machines and CPUs; the test above ran in an 8-core virtual machine, so I repeated the same test on other hardware.

On a 24-core physical machine, `__sync_fetch_and_add()` is only about 2-3x faster than the mutex.
Type | Average CPU cycles | Average time (us) |
---|---|---|
No lock | 535457026 | 233310.5 |
Mutex | 9331915480 | 4066156.667 |
Atomic | 3769900795 | 1643463.625 |
Overall, the atomic `__sync_fetch_and_add()` clearly outperforms the mutex.
In addition:

The atomic builtins described above take trailing optional arguments (`...`) meant to name the variables that need a memory barrier. Because GCC currently implements a full barrier (similar to `mb()` in the kernel: no memory operation before the builtin may be reordered after it), this parameter can be ignored. Some background on memory barriers follows.
About memory barriers: the CPU may reorder our instructions. This usually improves performance, but it can occasionally produce results we did not intend. For example, suppose we have a hardware device: to issue an operation, one register holds the command (READ), two registers hold the parameters (say an address and a size), and the last is the control register, which is written only after all the parameters are set in order to kick off the command; the device then reads the parameters and executes it. The code might look like:
```c
write1(dev.register_size, size);
write1(dev.register_addr, addr);
write1(dev.register_cmd, READ);
write1(dev.register_control, );
```