

Explained: Twitter's open-source distributed auto-increment ID algorithm snowflake, with a step-by-step verification

1. Introduction to snowflake

    With the rapid growth of the internet, distributed application systems have become commonplace, and distributed systems need all kinds of IDs. An ID must of course be globally unique, but beyond that, different businesses need different properties: a business with huge concurrency demands an ID generator with high efficiency and throughput; some banking businesses need transaction serial numbers keyed to each day's date; elsewhere we may want user IDs that are random, unordered, purely numeric, and fewer than 10 digits long. Because each scenario needs different ID properties, a variety of ID generators have emerged, but most of them rely on a database to control ID generation, so their performance is capped by the database's concurrency. Is there an ID generator that depends on no middleware at all (no database, no distributed cache service)? In the spirit of taking from open source and giving back to open source, this article introduces snowflake, Twitter's open-source distributed auto-increment ID algorithm, together with a derivation of how it works and a step-by-step walkthrough.

The snowflake algorithm generates IDs locally (no middleware and no network communication are involved), guarantees global uniqueness, produces IDs that are ordered and increasing overall, and can generate more than 3 million IDs per second.

2. How the snowflake algorithm works

The binary layout of a snowflake ID is shown below (fields separated by -):
0 - 00000000 00000000 00000000 00000000 00000000 0 - 00000 - 00000 - 00000000 0000

The first bit is unused. The next 41 bits hold a millisecond timestamp (41 bits cover about 69 years, counted from a configurable epoch; the code below uses twepoch = 1288834974657). Then come 5 bits of datacenterId (at most 2^5 = 32 values, binary 00000-11111, i.e. decimal 0-31) and 5 bits of workerId (likewise at most 32 values), so datacenterId and workerId together support up to 32 × 32 = 1024 deployed nodes. The final 12 bits are a counter within the millisecond (a 12-bit sequence number lets each node generate 2^12 = 4096 IDs per millisecond).

All the fields together total 64 bits, which fit exactly in a Java long (at most 19 decimal digits when rendered as a string).

On a single machine instance, the timestamp keeps the leading 41 bits unique; across multiple machine instances in a distributed system, giving each instance a distinct datacenterId and workerId prevents collisions in the middle 10 bits. The final 12 bits count up from 0 within each millisecond; to repeat, that is at most 4096 IDs per millisecond, or up to 4,096,000 per second. In theory, given enough CPU, a single machine can generate over 4 million IDs per second; measured throughput is 3 million+, which shows how efficient the algorithm is.
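The capacity figures above follow directly from the bit widths. A quick sketch of the arithmetic (field widths taken from the layout described above):

```java
public class SnowflakeCapacity {
    public static void main(String[] args) {
        // 41 bits of milliseconds: roughly how many years until the timestamp field overflows?
        double years = Math.pow(2, 41) / 1000.0 / 60 / 60 / 24 / 365;
        System.out.println(years);                 // ~69.7 years

        // 5 bits of datacenterId x 5 bits of workerId: total deployable nodes
        long nodes = (1L << 5) * (1L << 5);
        System.out.println(nodes);                 // 1024

        // 12 bits of sequence: ids per node per millisecond, and per second
        long perMillisecond = 1L << 12;
        System.out.println(perMillisecond);        // 4096
        System.out.println(perMillisecond * 1000); // 4096000

        // 1 + 41 + 5 + 5 + 12 = 64 bits: fits exactly in a signed long,
        // whose maximum value has 19 decimal digits
        System.out.println(String.valueOf(Long.MAX_VALUE).length()); // 19
    }
}
```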

(This section is adapted from: https://www.cnblogs.com/relucent/p/4955340.html)

3. snowflake source code (Java)
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CountDownLatch;

import lombok.ToString;
import lombok.extern.slf4j.Slf4j;
@ToString
@Slf4j
public class SnowflakeIdFactory {

private final long twepoch = 1288834974657L;  
private final long workerIdBits = 5L;  
private final long datacenterIdBits = 5L;  
private final long maxWorkerId = -1L ^ (-1L << workerIdBits);  
private final long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);  
private final long sequenceBits = 12L;  
private final long workerIdShift = sequenceBits;  
private final long datacenterIdShift = sequenceBits + workerIdBits;  
private final long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;  
private final long sequenceMask = -1L ^ (-1L << sequenceBits);  

private long workerId;  
private long datacenterId;  
private long sequence = 0L;  
private long lastTimestamp = -1L;  



public SnowflakeIdFactory(long workerId, long datacenterId) {  
    if (workerId > maxWorkerId || workerId < 0) {  
        throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));  
    }  
    if (datacenterId > maxDatacenterId || datacenterId < 0) {  
        throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));  
    }  
    this.workerId = workerId;  
    this.datacenterId = datacenterId;  
}  

public synchronized long nextId() {  
    long timestamp = timeGen();  
    if (timestamp < lastTimestamp) {  
        //The server clock moved backwards; the id generator stops serving.
        throw new RuntimeException(String.format("Clock moved backwards.  Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));  
    }  
    if (lastTimestamp == timestamp) {  
        sequence = (sequence + 1) & sequenceMask;  
        if (sequence == 0) {  
            timestamp = tilNextMillis(lastTimestamp);  
        }  
    } else {  
        sequence = 0L;  
    }  

    lastTimestamp = timestamp;  
    return ((timestamp - twepoch) << timestampLeftShift) | (datacenterId << datacenterIdShift) | (workerId << workerIdShift) | sequence;  
}  

protected long tilNextMillis(long lastTimestamp) {  
    long timestamp = timeGen();  
    while (timestamp <= lastTimestamp) {  
        timestamp = timeGen();  
    }  
    return timestamp;  
}  

protected long timeGen() {  
    return System.currentTimeMillis();  
}  

public static void testProductIdByMoreThread(int dataCenterId, int workerId, int n) throws InterruptedException {  
    List<Thread> tlist = new ArrayList<>();  
    Set<Long> setAll = new HashSet<>();  
    CountDownLatch cdLatch = new CountDownLatch(10);  
    long start = System.currentTimeMillis();  
    int threadNo = dataCenterId;  
    Map<String,SnowflakeIdFactory> idFactories = new HashMap<>();  
    for(int i=0;i<10;i++){  
        //Use the thread name as the map key.
        idFactories.put("snowflake"+i,new SnowflakeIdFactory(workerId, threadNo++));  
    }  
    for(int i=0;i<10;i++){  
        Thread temp =new Thread(new Runnable() {  
            @Override  
            public void run() {  
                Set<Long> setId = new HashSet<>();  
                SnowflakeIdFactory idWorker = idFactories.get(Thread.currentThread().getName());  
                for(int j=0;j<n;j++){  
                    setId.add(idWorker.nextId());  
                }  
                synchronized (setAll){  
                    setAll.addAll(setId);  
                    log.info("{} produced {} ids and merged them into setAll.",Thread.currentThread().getName(),n);
                }  
                cdLatch.countDown();  
            }  
        },"snowflake"+i);  
        tlist.add(temp);  
    }  
    for(int j=0;j<10;j++){  
        tlist.get(j).start();  
    }  
    cdLatch.await();  

    long end1 = System.currentTimeMillis() - start;  

    log.info("Total elapsed: {} ms, expected {} ids to be generated, actual merged id count: {}",end1,10*n,setAll.size());

}  

public static void testProductId(int dataCenterId, int workerId, int n){
    SnowflakeIdFactory idWorker = new SnowflakeIdFactory(workerId, dataCenterId);
    SnowflakeIdFactory idWorker2 = new SnowflakeIdFactory(workerId+1, dataCenterId);
    Set<Long> setOne = new HashSet<>();
    Set<Long> setTwo = new HashSet<>();
    long start = System.currentTimeMillis();
    for (int i = 0; i < n; i++) {
        setOne.add(idWorker.nextId());//add to set
    }
    long end1 = System.currentTimeMillis() - start;
    log.info("Batch 1: expected {} ids, actually generated {}<<<<*>>>>elapsed: {} ms",n,setOne.size(),end1);

    for (int i = 0; i < n; i++) {
        setTwo.add(idWorker2.nextId());//add to set
    }
    long end2 = System.currentTimeMillis() - start;
    log.info("Batch 2: expected {} ids, actually generated {}<<<<*>>>>elapsed: {} ms",n,setTwo.size(),end2);

    setOne.addAll(setTwo);
    log.info("Merged total of generated ids: {}",setOne.size());

}

public static void testPerSecondProductIdNums(){  
    SnowflakeIdFactory idWorker = new SnowflakeIdFactory(1, 2);  
    long start = System.currentTimeMillis();  
    int count = 0;  
    for (int i = 0; System.currentTimeMillis()-start<1000; i++,count=i) {  
        /**  Test mode 1: pure id generation; produces 3 million+ ids per second. */
        idWorker.nextId();
        /**  Test mode 2: log each id while generating it; throughput is then limited
         * by log.error()'s own throughput, hovering around 100k per second. */
        //log.error("{}",idWorker.nextId());
    }  
    long end = System.currentTimeMillis()-start;  
    System.out.println(end);  
    System.out.println(count);  
}  

public static void main(String[] args) {  
    /** case1: how many ids can be generated per second?
     *   Result: 3 million+ ids per second. */
    //testPerSecondProductIdNums();

    /** case2: single thread - several producers each generate N ids; are there any duplicates?
     *   Result: passed, no duplicates. */
    //testProductId(1,2,10000);//passed!
    //testProductId(1,2,20000);//passed!

    /** case3: multiple threads - several producers generate N ids concurrently; do any ids collide globally?
     *   Result: passed, no duplicates. */
    try {
        testProductIdByMoreThread(1,2,100000);//on a single machine this scenario loses at least half its throughput!
    } catch (InterruptedException e) {  
        e.printStackTrace();  
    }  

}  

}


4. snowflake derivation and walkthrough
Notes:
Instance used in the walkthrough: SnowflakeIdFactory idWorker = new SnowflakeIdFactory(1, 2);
At runtime workerId=1 and datacenterId=2, i.e. the machine instance's producer number and data-center number respectively;
sequence=0 means the per-millisecond id count starts from 0;
the derivation below is carried out at the instant timestamp = 1482394743339.

In one sentence: the walkthrough simulates the id generator with workerId=1 and datacenterId=2 producing its first id during the millisecond 1482394743339.
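That walkthrough can be reproduced numerically. Here is a minimal sketch (shift amounts and twepoch taken from the source in section 3) that assembles the first ID for that millisecond exactly as nextId() does, then decodes each field back out to confirm the bit layout:

```java
public class SnowflakeWalkthrough {
    public static void main(String[] args) {
        final long twepoch = 1288834974657L; // epoch used by the source above
        long timestamp    = 1482394743339L;  // the millisecond being simulated
        long datacenterId = 2L;
        long workerId     = 1L;
        long sequence     = 0L;              // first id within this millisecond

        // Assemble, as in nextId():
        // (timestamp - twepoch) << 22 | datacenterId << 17 | workerId << 12 | sequence
        long id = ((timestamp - twepoch) << 22)
                | (datacenterId << 17)
                | (workerId << 12)
                | sequence;
        System.out.println(id);

        // Decode the fields back out of the assembled id
        System.out.println((id >> 22) + twepoch); // 1482394743339 (timestamp)
        System.out.println((id >> 17) & 0x1F);    // 2 (datacenterId)
        System.out.println((id >> 12) & 0x1F);    // 1 (workerId)
        System.out.println(id & 0xFFF);           // 0 (sequence)
    }
}
```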

end!
References
https://github.com/twitter/snowflake

https://www.cnblogs.com/relucent/p/4955340.html

Reposted from: https://blog.csdn.net/li396864285/article/details/54668031

Last updated: 2017-10-27 11:04:30
