My stream table has actually received more than 8 million records, but I can currently see only about 7 million of them.
My dolphindb.cfg (single server, single node) contains the following:
# Publisher node configuration
maxPubConnections=20
persistenceDir=C:/DevTools/DolphinDB/Data/Streaming
persistenceWorkerNum=1
maxPersistenceQueueDepth=10000000
maxMsgNumPerBlock=1024
maxPubQueueDepthPerSite=10000000
startup=C:/DevTools/DolphinDB/startup.dos
# Subscriber node configuration
subPort=10777
subExecutors=0
maxSubConnections=64
subExecutorPooling=false
maxSubQueueDepth=10000000
The contents of startup.dos are as follows:
login(`admin, `123456);
//////// the ticks table
/////// column names and data types; then create an empty table
tbColNames=
`TradingDay`InstrumentID`ExchangeID`ExchangeInstID`LastPrice`Volume`Amount`OpenPosition`PreSettlementPrice`PreClosePrice`PreOpenInterest`OpenPrice`HighestPrice`LowestPrice`TotalVolume`TotalTurnover`OpenInterest`ClosePrice`SettlementPrice`UpperLimitPrice`LowerLimitPrice`ActionTime`RecvTime`BidPrice1`BidVolume1`AskPrice1`AskVolume1`BidPrice2`BidVolume2`AskPrice2`AskVolume2`BidPrice3`BidVolume3`AskPrice3`AskVolume3`BidPrice4`BidVolume4`AskPrice4`AskVolume4`BidPrice5`BidVolume5`AskPrice5`AskVolume5`AveragePrice`PreDelta`CurrDelta`RecordNo`TotalRecordNo`InDbTime
tbColTypes=[DATE,SYMBOL,SYMBOL,SYMBOL,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,TIMESTAMP, TIMESTAMP,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,INT,INT,TIMESTAMP] ;
///////// create the empty table
tbTicks = streamTable(100000:0,tbColNames,tbColTypes) ;
////////// enable persistence and sharing for the stream table (tbTicks)
enableTableShareAndPersistence(table=tbTicks, tableName=`ticks, cacheSize=7000000, retentionMinutes=4320 )
In the GUI console, I then run the following:
login(`admin, `123456)
/////////// check the shared table (ticks)
count(ticks) /// output: 7013355
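For reference, my understanding is that the persistence status of the shared table can be inspected roughly like this (a sketch; I have not verified the exact fields the function returns):

```
// Inspect the persistence metadata of the shared stream table.
// My assumption is that getPersistenceMeta returns a dictionary with
// fields like sizeInMemory and sizeOnDisk, which would show how many
// rows are cached in memory vs. written to disk.
getPersistenceMeta(ticks)
```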
////// subscription table
////// column names and data types; then create an empty table
tbColNames=
`TradingDay`InstrumentID`ExchangeID`ExchangeInstID`LastPrice`Volume`Amount`OpenPosition`PreSettlementPrice`PreClosePrice`PreOpenInterest`OpenPrice`HighestPrice`LowestPrice`TotalVolume`TotalTurnover`OpenInterest`ClosePrice`SettlementPrice`UpperLimitPrice`LowerLimitPrice`ActionTime`RecvTime`BidPrice1`BidVolume1`AskPrice1`AskVolume1`BidPrice2`BidVolume2`AskPrice2`AskVolume2`BidPrice3`BidVolume3`AskPrice3`AskVolume3`BidPrice4`BidVolume4`AskPrice4`AskVolume4`BidPrice5`BidVolume5`AskPrice5`AskVolume5`AveragePrice`PreDelta`CurrDelta`RecordNo`TotalRecordNo`InDbTime
tbColTypes=[DATE,SYMBOL,SYMBOL,SYMBOL,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,DOUBLE,TIMESTAMP, TIMESTAMP,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,INT,DOUBLE,DOUBLE,DOUBLE,INT,INT,TIMESTAMP]
///////// create the empty table
share streamTable(10000:0,tbColNames,tbColTypes) as subTb1
///////////// subscribe to the table on the local node
topic1 = subscribeTable(, `ticks, `action_View_01, -1, subTb1, true)
topic1 /// output: localhost:8848:local8848/ticks/action_View_01
count(subTb1) /// output: 0
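I wonder whether the offset argument matters here. My understanding (unverified) is that offset=-1 only subscribes to messages arriving after the subscription starts, which would explain the count of 0. A sketch of re-subscribing from the beginning of the persisted log instead:

```
// Drop the existing subscription first, then re-subscribe with offset=0,
// which (as I understand it) replays all messages from the start,
// including those already persisted to disk.
unsubscribeTable(, `ticks, `action_View_01)
topic2 = subscribeTable(, `ticks, `action_View_01, 0, subTb1, true)
```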
getStreamingStat().pubTables // check the published tables
////// output:
////// tableName, subscriber, msgOffset, actions
////// ticks, localhost:10777, 8651755, action_View_01
getStreamingStat().pubConns // publisher connection status
////// output:
////// client, queDepthLimit, queueDepth, tables
////// localhost:10777, 10000000, 0, ticks
getStreamingStat().subWorkers // check subscription worker status
////// no rows returned
getStreamingStat().subConns // subscriber connections
////// no rows returned
How can I read back the 1 million or so records that have already been persisted to disk but are no longer visible in memory?
Or is there something wrong with my steps above?
Thanks!