Foreign Literature Translation: Robot-Sensor Networks (25 pages, Chinese-English)
Appendix I
Robot-Sensor Networks
General Robotics Automation Sensing and Perception Laboratory,
University of Pennsylvania, Philadelphia, PA, USA
Abstract: This paper addresses the problem of real-time position and orientation estimation of networked mobile robots in two-dimensional Euclidean space with simultaneous tracking of a rigid unknown object, based on exteroceptive sensory information extracted from distributed vision systems. Sufficient and necessary conditions for team localization are proposed. A localization and object tracking approach based on statistical operators and graph searching algorithms is presented for a team of robots localized with heterogeneous sensors. The approach was implemented on an experimental platform consisting of car-like mobile robots equipped with omnidirectional video cameras and IEEE 802.11b wireless networking. The experimental results validate the approach.
Key words: cooperative localization; multi-robot formation; distributed sensor network; sensor data fusion
Introduction
In order for a team of mobile robots to navigate autonomously in some desired formations and further perform cooperative tasks, such as surveillance and target acquisition, they must be able to localize themselves in the formation as well as in a global reference frame [1,2]. Therefore, how to estimate robots' positions and orientations (poses) in a precise and efficient way is of particular interest. Our interest in this paper is localizing a team of heterogeneous robots in two-dimensional special Euclidean space SE(2) and localizing targets with information obtained from heterogeneous sensors. Specifically, we are interested in conditions under which all robots in the formation can be localized. Our localization approach is closely related to those presented by Stroupe et al. in the sense that the robots have access to their teammates' sensor data (or some related information) and combine this information with that coming from their own sensors. In that paper, the robots use distributed sensing to improve self-localization or target localization [1]. The Kalman-filter methodologies of both papers have presented solutions for the relative multi-robot localization problem by combining information exchanged by the robots using least squares optimization. Recent literature uses graphs to model sensor networks and cooperative control schemes [5, 6]. Results on graph rigidity theory [7-9] can be directly applied to multi-robot systems in R2 (Refs. [10, 11]). However, relatively little attention has been paid to networks with bearing observations, which is particularly important for networks of cameras. This paper presents a different approach for team localization and object tracking in SE(2) based on statistical operators and simple graph searching algorithms. Furthermore, unlike previous approaches, we formulate the problem in such a way that team localization and object tracking can be solved by the same algorithm. We also show how the advantages related to optimality of previous works can easily be incorporated into our method. As an example, this paper shows how to incorporate an extended Kalman filter (EKF) to improve object tracking. Finally, to validate our method, we show experimental results obtained with a group of five car-like autonomous robots (see Fig. 1) equipped with omnidirectional video cameras and an IEEE 802.11b wireless network. We assume that each robot has an omnidirectional transmitter and receiver and that each robot can listen to every other robot. Thus, all robots in the formation can exchange their estimates and localize themselves in a cooperative manner. Note that we do not assume any kind of proprioceptive information, such as the robot's velocity and acceleration from inertial sensors.
2 Localizability of Multi-Robot Formations
To consider whether a team of robots can be localized, it is necessary to fuse the information available from different sensors and verify whether this information is adequate. For a team of n robots in SE(2), localization is the determination of the 3n coordinates that characterize the robot positions and orientations. Thus, it is necessary to see if 3n independent measurements are available. Since every measurement specifies a constraint on the 3n coordinates, we developed a test of functional independence for all constraints. Accordingly, we define a constraint matrix whose rank will allow us to verify whether the team can be localized. For each range and bearing measurement, the constraints on the coordinates in frame B_i are given by: A pair of bearing measurements, φ_jk and φ_ik, involving robots R_j and R_k, results in the following Type 3 constraint.
Finally, any pair of bearing measurements, φ_ij and φ_kj, involving three robots R_i, R_j, and R_k, results in the following Type 4 constraint. All these constraints can be written in the form: where L1 is a linear combination of measurements, and h is a nonlinear function of the shape variables in some body-fixed reference frame. There are only four types of constraints that can be used to describe the network. All other equations that can be written are functionally dependent on the above constraint equations.
By differentiating the four constraint equations, we obtain expressions describing the allowable small changes (equivalently, velocities) of the robot coordinates. Following this procedure for M possible constraints gives an M × 3n matrix for reference frame B_i:
Definition 1 A team of n robots in SE(2) is said to be localizable if the 3n coordinates of the n robots can be estimated in an inertial frame.
Remark 1 Localizability is obviously related to observability in system theory [14]: if a team is localizable over any time interval, the system is completely observable. However, we will use Definition 1 in an instantaneous, static setting, and thus refrain from using system theoretical notation.
Remark 2 We can also require the team to be localized in a relative setting, where it is only necessary to estimate the 3n − 3 coordinates of n − 1 robots in a body reference frame.
Theorem 1 A formation of n robots in SE(2) is localizable only if N = 3n − 2n_g − n_b − n_r ≤ 0 (11), where n_g, n_b, and n_r are the numbers of measurements made by inertial or global positioning sensors, bearing sensors, and range sensors, respectively.
Proof It is easy to verify that each absolute position measurement made by any global positioning sensor can be directly used to estimate two state variables, and each bearing and range measurement will add at least one constraint on the configuration or shape of the formation. Thus, n_g global position sensors, n_b bearing sensors, and n_r range sensors will provide at most 2n_g + n_b + n_r independent measurements. Since 3n state variables have to be estimated, 2n_g + n_b + n_r must be at least equal to 3n. Given a formation of robots with limited sensing capability, Theorem 1 provides a simple necessary condition to verify localizability without considering the formation geometry. Note that additional sensors, such as landmark sensors, compasses, and inertial measurement units (IMUs), can be incorporated into this framework in a straightforward way.
Fig. 4 Sample bearing sensing graphs for a group of three mobile robots in SE(2). Trigonometrically, graph (b) can always be transformed into (a) based on the exteroceptive sensory information available in the computational network.
4 Localization Approach
Our localization approach assumes that each robot has a unique identification (ID) both for communication and sensing. At first, we also assume that object corners have distinct sensing IDs. This assumption can be relaxed if simple heuristics based on pattern classification are used to solve the problem of associating robots' measurements. Our approach is centralized in the sense that each robot collects sensing data from the other robots and combines this data with its own. Thus, given the previous background, localization in a network of mobile robots can be addressed by combining a series of operations that involve transformations and combinations. The algorithm visits all nodes of a tree only once, by visiting all the nodes at the same depth before going deeper. Here, since we are not considering trees, the nodes can be visited more than once. Thus, if there is more than one path between the root of the graph and a specific node, all these paths will be used. The first time a vertex is visited, its position is estimated. From then on, each time a node is reached, its previously estimated pose is combined with the pose recently estimated using the new path. Because a node is allowed to be visited more than once, the algorithm could theoretically enter loops. Loops cause situations of interdependence where, for example, the pose of vi can be computed using information from vj and the pose of vj can be computed using information from vi. To avoid this problem, the original graph is transformed into a directed graph from which loops are removed. The new graph is similar to a tree (although it is not a tree), with the root being the robot chosen as the origin. Loops are avoided by removing edges between robots at the same depth. Because objects do not make measurements, they never create loops and their edges are never deleted. The same occurs with unidirectional edges between two robots. A bidirectional edge between two robots at different depths may also create loops. These situations are avoided because the algorithm treats the graph as a tree and never moves towards the root. Exceptions are made for unidirectional edges, which are always followed regardless of direction. Figure 5 shows an example of the derived graph and four steps of the algorithm for the situation presented. This is actually the combination of R4's and R1's local measurements. Because R4 cannot localize R1 directly, an indirect path going through R3 is needed. One of the issues with the previous algorithm is that some edges (such as e12 and e21 in the previous example) are not used in the robots' and object's pose estimation. To avoid wasting useful information, one could divide the algorithm into two parts, assuming the robots' orientations and positions can be computed separately. The two parts are: 1) estimation of the robots' orientations using the same algorithm; and 2) estimation of the robots' and targets' positions using a linear weighted least squares method that assumes that the positions of two robots (R_i and R_j) in a third robot's (R_k) coordinate frame are linearly related by: where θ_ki and φ_ij are assumed to be known. The necessity of knowing θ_ki explains the division of the algorithm into two parts. Besides computing the orientations, the first part of the algorithm is responsible for computing the number of robots connected to the computational network and, consequently, for defining the variables to be computed. Some edges are still wasted in this part of the algorithm, which is reasonable under the observation that bearing measurements tend to be much better than range ones. Another improvement can be achieved by using a dynamic Kalman filter to estimate the object's position. Because the object is rigid, if a model that relates its corners is used, tracking can be performed even when some corners cannot be seen by the robots.
A simple discrete model, in which the dimensions of the object are not necessarily known, is: where v_xj and v_yj are the velocity components of O_j, d_j and θ_j are the size and orientation of the edge between O_j and O_{j+1}, T is the sample time, and ω is the object's angular velocity. This model considers that the object's velocity components are constant. Since this is not always true, during the filter design low values must be assigned to the variables that represent the confidence level of the last three lines of the model. Because a nonlinear model is used, an extended Kalman filter (EKF) is necessary. The measurements used in this filter will be the x and y of each object corner relative to the origin robot. Once the robots' poses are estimated, the transformations in Eqs. (18) and (19) are used to compute the vector of measurements and its covariance. The EKF introduces another step in the algorithm: the vertices of the graph relative to the object corners must be removed from the localization steps, because the measurement combinations are now performed by the filter.
4.2 Centralized × Distributed
While the previous centralized approach works very well for a relatively small group of robots, network issues, such as traffic and delays, and the lack of computational resources in a single robot can pose significant problems when we are considering groups with tens or hundreds of robots. In these cases the use of distributed algorithms becomes mandatory. We therefore want a way to use the same algorithm while reducing both the computation and the bandwidth needed, by decentralizing part of the processing. In general, mobile robots only need local information to perform a task. Thus, if each robot collects information from its immediate neighbors and combines this data locally, it has the information it needs most of the time. In the case where a robot can listen to every robot in the group (i.e., all robots are within its communication range), it could receive all the data and, for example, localize only the robots located within a certain distance from it (measured by depth in the graph). Alternatively, the robot could be more selective and choose to localize only the robots that can see a given object or location. In this case, the robot will follow only some desired paths on the graph, changing the algorithm slightly. Another way to decentralize the processing, which may be useful when ad hoc networks are used and the robots cannot talk to each other directly, is to localize only nearby robots (robots within communication range). In this situation, the robots do not need to work as routers relaying messages, and considerable bandwidth is saved. If global information is necessary, a robot can ask one or more of its neighbors for the positions of the other robots (or of a specific robot) rather than raw data, and compute them using transformations (18) and (19). When real-time estimation is not a constraint, this procedure can be performed recursively until state information for the whole group is available.
4.3 Global Localization
So far, only relative localization in a body reference frame has been considered, where each robot computes the positions of the others in its own reference frame. When measurements relative to a fixed frame W are available to the root robot, simple transformations can be used to convert the relative estimates into global coordinates.
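The "simple transformations" mentioned above amount to composing the root robot's global pose with each relative estimate. A minimal sketch of this idea, in our own notation (`to_global` is not a name from the paper), assuming the root's global pose (x, y, θ) in W is known:

```python
import numpy as np

def to_global(pose_root, p_rel):
    """Transform a position expressed in the root robot's body frame
    into world coordinates W, given the root's global pose (x, y, theta).
    """
    x, y, th = pose_root
    # planar rotation from the body frame to the world frame
    Rm = np.array([[np.cos(th), -np.sin(th)],
                   [np.sin(th),  np.cos(th)]])
    return np.array([x, y]) + Rm @ np.asarray(p_rel, dtype=float)
```

For example, a teammate seen 1 m ahead of a root robot located at (1, 2) and heading 90° lies at (1, 3) in world coordinates.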
5 Experimental Results
To demonstrate the previous methodology, we present localization experiments on a team of five car-like mobile robots equipped with omnidirectional cameras and an IEEE 802.11b wireless network (see Fig. 1). To facilitate the vision processing, each robot and each corner of the object is marked with a special color that provides a unique sensing identification for each robot. A calibrated overhead camera and an external computer are used to localize the team in the environment. This external computer also collects the sensor data broadcast over the communication network. A limitation of our omnidirectional cameras is that their resolution decreases with the distance to the object. We therefore assume that range data are subject to normally distributed noise with variance proportional to the fourth power of the range for values smaller than 1.5 m, while bearing readings suffer from normally distributed noise with constant variance. Range estimates greater than 1.5 m are ignored. Figure 6 shows ground-truth data obtained using the calibrated overhead camera for an experiment in which one robot (R1) moves towards a target localized by another robot (R2). This robot needs to localize the target through the data broadcast by the other robots. The field of view, represented by the dashed circles, shows that R1 can basically see only one or two robots at a time. Figure 6 shows the estimate of R1's motion in R0's reference frame
Fig. 6 R1 moving towards O1 based on data collected by R2 and shared over the computational network. The dots represent R1's positions as estimated by R0, and the continuous line the real trajectory. The inner dashed circles represent the robot size and the outer ones the cameras' field of view.
(which coincides with the global reference frame) and R1's real trajectory. Figure 7 shows two snapshots of three robots tracking a triangular box using the EKF. R0's reference frame is shown. Robots R1 and R2 are able to see the box corners, while R0 is only used to localize the other two. Even though R0 cannot see the box corners, it is able to track the box using data from its teammates.
The snapshot on the right shows that when one robot becomes blind, the corner covariances increase, but the box is still tracked.
The last result shows how the robots can be globally localized if data from an external calibrated camera are used. Figure 8 shows two images from the external camera localizing the robots in the environment.
The localization results for these two configurations, plus an intermediate one in which R4 can be seen by the camera, are shown in Table 1. Thanks to the data from the other robots, the robots that are not seen by the external camera can still be localized.
6 Conclusions
這一張紙?zhí)峁┮粋€(gè)策略給在 SE 仿制多機(jī)械手形成(2). 使用這圖標(biāo)模型,我們已經(jīng)為在來自分配的照相機(jī)測(cè)量的 SE(2) 決定移動(dòng)機(jī)械手的完全能地方化形成源自充份的和必需的情況。 這一張紙經(jīng)過統(tǒng)計(jì)的操作員和曲線圖搜索運(yùn)算法則在 SE(2) 的被分配的機(jī)械手-感應(yīng)器的網(wǎng)絡(luò)中也為局限和追蹤呈現(xiàn)一個(gè)簡(jiǎn)單的和有效率的方法。 給州判斷的被提議的方法在分配的網(wǎng)絡(luò)里面對(duì)感應(yīng)器或機(jī)械手的數(shù)字感到可攀登。 實(shí)驗(yàn)式地,我們的局限運(yùn)算法則已經(jīng)被實(shí)現(xiàn)和廣泛地在排范圍從物體處理到感應(yīng)器配置的一個(gè)多機(jī)械手工作的大多樣性用。
Appendix II
Robot-Sensor Networks
General Robotics Automation Sensing and Perception Laboratory,
University of Pennsylvania, Philadelphia, PA, USA
Abstract: This paper addresses the problem of real-time position and orientation estimation of networked mobile robots in two-dimensional Euclidean space with simultaneous tracking of a rigid unknown object based on exteroceptive sensory information extracted from distributed vision systems. The sufficient and necessary conditions for team localization are proposed. A localization and object tracking approach based on statistical operators and graph searching algorithms is presented for a team of robots localized with heterogeneous sensors. The approach was implemented in an experimental platform consisting of car-like mobile robots equipped with omnidirectional video cameras and IEEE 802.11b wireless networking. The experimental results validate the approach.
Key words: cooperative localization; multi-robot formation; distributed sensor network; sensor data fusion
Introduction
In order for a team of mobile robots to navigate autonomously in some desired formations and further perform cooperative tasks, such as surveillance and target acquisition, they must be able to localize themselves in the formation as well as in a global reference frame [1,2]. Therefore, how to estimate robots' positions and orientations (poses) in a precise and efficient way is of particular interest. Our interest in this paper is localizing a team of heterogeneous robots in two-dimensional special Euclidean space SE(2) and localizing targets with information obtained from heterogeneous sensors. Specifically, we are interested in conditions under which all robots in the formation can be localized in the environment. Our localization approach is closely related to those presented by Stroupe et al. in the sense that the robots have access to their teammates' sensor data (or some related information) and combine this information with that coming from their own sensors. In that paper, the robots use distributed sensing to improve self-localization or target localization [1]. The Kalman-filter methodologies of both papers have presented solutions for the relative multi-robot localization problem by combining information exchanged by the robots using least squares optimization. Recent literature uses graphs to model sensor networks and cooperative control schemes [5, 6]. Results on graph rigidity theory [7-9] can be directly applied to multi-robot systems in R2 (Refs. [10, 11]). However, relatively little attention has been paid to networks with bearing observations, which is particularly important for networks of cameras. This paper presents a different approach for team localization and object tracking in SE(2) based on statistical operators and simple graph searching algorithms. Furthermore, unlike previous approaches, we formulate the problem in such a way that team localization and object tracking can be solved by the same algorithm.
We also show how the advantages related to optimality of previous works can easily be incorporated into our method. As an example, this paper shows how to incorporate an extended Kalman filter (EKF) to improve object tracking. Finally, to validate our method, we show experimental results obtained with our Clod Buster platform consisting of a group of five car-like autonomous robots (see Fig. 1) equipped with omnidirectional video cameras and an IEEE 802.11b wireless network.
We assume that each robot has an omnidirectional transmitter and receiver and each robot can listen to
every other robot. Thus, all robots in the formation can exchange their estimates and localize themselves in a cooperative manner. Note that we do not assume any kind of proprioceptive information such as the robot’s velocity and acceleration from any inertial sensors.
2 Localizability of Multi-Robot Formations
To consider whether a team of robots can be localized, it is necessary to fuse the information available from different sensors and verify if this information is adequate. For a team of n robots in SE(2), localization is the determination of the 3n coordinates that characterize the robot positions and orientations. Thus, it is necessary to see if 3n independent measurements are available. Since every measurement specifies a constraint on the 3n coordinates, we developed a test of functional independence for all constraints. Accordingly, we define a constraint matrix whose rank will allow us to verify whether the team can be localized. For each range and bearing measurement, the constraints on the coordinates in frame B_i are given by: A pair of bearing measurements, φ_jk and φ_ik, involving robots R_j and R_k, results in the following Type 3 constraint.
Finally, any pair of bearing measurements, φ_ij and φ_kj, involving three robots R_i, R_j, and R_k, results in the following Type 4 constraint. All these constraints can be written in the form: where L1 is a linear combination of measurements, and h is a nonlinear function of the shape variables in some body-fixed reference frame. There are only four types of constraints that can be used to describe the network. All other equations that can be written are functionally dependent on the above constraint equations.
By differentiating the four constraint equations, we get expressions describing allowable small changes (equivalently velocities) of the robot coordinates. Following this procedure for M possible constraints gives an M×3n matrix for reference frame Bi:
Definition 1 A team of n robots in SE(2) is said to be localizable if the 3n coordinates of the n robots can be estimated in an inertial frame.
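Definition 1 can be checked numerically by stacking the differentiated constraints into the constraint matrix and comparing its rank against the number of coordinates to be estimated. The function below is our own illustration (the name `constraint_rank_test` and its arguments are not from the paper); it assumes the rows of `H` are the differentiated constraint equations evaluated at the current configuration:

```python
import numpy as np

def constraint_rank_test(H, n, relative=False):
    """Check localizability of an n-robot team in SE(2) from the
    M x 3n constraint matrix H (rows = differentiated constraints).
    In an inertial frame all 3n coordinates must be determined; in a
    relative, body-frame setting only 3n - 3 of them (cf. Remark 2)."""
    required = 3 * n - 3 if relative else 3 * n
    return np.linalg.matrix_rank(H) >= required
```

For example, for n = 2 robots a full-rank 6 × 6 matrix passes the inertial-frame test, while a rank-1 matrix fails even the relative test.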
Remark 1 Localizability is obviously related to observability in system theory[14] — if a team is localizable over any time interval, the system is completely observable. However, we will use Definition 1 in an instantaneous, static setting, and thus refrain from using system theoretical notation.
Remark 2 We can also require the team to be localized in a relative setting where it is only necessary to estimate 3n − 3 coordinates of n − 1 robots in a body reference frame.
Theorem 1 A formation of n robots in SE(2) is localizable only if N = 3n − 2n_g − n_b − n_r ≤ 0 (11), where n_g, n_b, and n_r are the numbers of measurements made by inertial or global positioning sensors, bearing sensors, and range sensors, respectively.
Proof It is easy to verify that each absolute position measurement made by any global positioning sensor can be directly used to estimate two state variables, and each bearing and range measurement will add at least one constraint on the configuration or shape of the formation. Thus, n_g global position sensors, n_b bearing sensors, and n_r range sensors will provide at most 2n_g + n_b + n_r independent measurements. Since 3n state variables have to be estimated, 2n_g + n_b + n_r must be at least equal to 3n. Given a formation of robots with limited sensing capability, Theorem 1 provides a simple necessary condition to easily verify localizability without considering the formation geometry. Note that additional sensors, such as landmark sensors, compasses, and inertial measurement units (IMUs), can be incorporated into this framework in a straightforward way.
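The counting condition of Theorem 1 is cheap to evaluate before any geometry is considered. The following helper (our own illustration, not code from the paper) simply restates Eq. (11):

```python
def necessary_condition(n, ng, nb, nr):
    """Theorem 1: a formation of n robots in SE(2) can be localizable
    only if N = 3n - 2*ng - nb - nr <= 0.  This is necessary, not
    sufficient: degenerate formation geometries can still fail."""
    return 3 * n - 2 * ng - nb - nr <= 0
```

For instance, three robots with one absolute position fix, four bearing measurements, and three range measurements give N = 9 − 2 − 4 − 3 = 0, so the condition holds; with no absolute fix and only two bearings and two ranges it fails.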
Fig. 4 Sample bearing sensing graphs for a group of three mobile robots in SE(2). Trigonometrically, graph (b) can always be transformed into (a) based on exteroceptive sensory information available in the computational network.
4 Localization Approach
Our localization approach assumes that each robot has a unique identification (ID) both for communication and sensing. At first, we also assume that object corners have distinct sensing IDs. This assumption can be relaxed if simple heuristics based on pattern classification are used to solve the problem of associating robots' measurements. Our approach is centralized in the sense that each robot collects sensing information from other robots and combines this information using its own computational resources via the computational network. Thus, given the previous background, localization in a network of mobile robots can be addressed by combining a series of operations that involve transformations and combinations. The algorithm visits all nodes of a tree only once, by visiting all the nodes at the same depth before going deeper. Here, since we are not considering trees, the nodes can be visited more than once. Thus, if there is more than one path between the root of the graph and a specific node, all these paths will be used. The first time a vertex is visited, its position is estimated. From then on, each time a node is reached, its previously estimated pose is combined with the pose recently estimated using the new path. Because a node is allowed to be visited more than once, the algorithm could theoretically enter loops. Loops cause situations of interdependence where, for example, the pose of vi can be computed using information from vj and the pose of vj can be computed using information from vi. To avoid this problem, the original graph is transformed into a directed graph from which loops are removed. The new graph is similar to a tree (although it is not a tree), with the root being the robot chosen as the origin. Loops are avoided by removing edges between robots at the same depth. Because objects do not make measurements, they never create loops and their edges are never deleted. The same occurs with unidirectional edges between two robots.
A bidirectional edge between two robots of different depths may also create loops. These situations are avoided because the algorithm treats the graph as a tree and never moves towards the root. Exceptions are made with unidirectional edges that are always followed independently of the direction. Figure 5 shows an example of the derived graph and four steps of the algorithm for the situation presented.
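The traversal just described can be sketched as a breadth-first propagation of estimates from the root. The routine below is a strong simplification of the paper's algorithm, under our own assumptions: names like `localize` are ours, orientations and the unidirectional-edge exception are omitted, and each edge's relative measurement is assumed to be already expressed in the root frame. It skips edges between nodes at the same depth, never moves back towards the root, and fuses estimates (covariance-weighted) when a node is reached through a second path:

```python
import collections
import numpy as np

def localize(edges, root):
    """Breadth-first position propagation over a sensing graph.

    edges: dict mapping (parent, child) -> (relative_position, covariance),
    both 2-D and expressed in the root frame for simplicity of this sketch.
    Returns a dict node -> (position estimate, covariance).
    """
    est = {root: (np.zeros(2), np.zeros((2, 2)))}
    depth = {root: 0}
    queue = collections.deque([root])
    while queue:
        u = queue.popleft()
        for (a, b), (rel, cov) in edges.items():
            if a != u:
                continue
            # same-depth edges (and edges back towards the root) are dropped,
            # mimicking the loop-removal step of the derived graph
            if b in depth and depth[b] <= depth[u]:
                continue
            pos = est[u][0] + rel
            P = est[u][1] + cov
            if b in est:                        # second path: fuse estimates
                p0, P0 = est[b]
                K = P0 @ np.linalg.inv(P0 + P)  # covariance-weighted gain
                est[b] = (p0 + K @ (pos - p0), (np.eye(2) - K) @ P0)
            else:
                est[b] = (pos, P)
                depth[b] = depth[u] + 1
                queue.append(b)
    return est
```

With two independent paths R0→R1→R3 and R0→R2→R3 reporting consistent positions, R3's fused estimate agrees with both while its covariance shrinks.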
This is actually the combination of R4's and R1's local measurements. Because R4 cannot localize R1 directly, an indirect path going through R3 is needed. One of the issues with the previous algorithm is that some of the edges (such as e12 and e21 in the previous example) are not used in the robots' and object's pose estimation. To avoid wasting useful information, one could divide the algorithm into two parts, assuming the robots' orientations and positions can be computed separately. The two parts are: 1) estimation of the robots' orientations using the same algorithm; and 2) estimation of the robots' and targets' positions using a linear weighted least squares method that assumes that the positions of two robots (R_i and R_j) in a third robot's (R_k) coordinate frame are linearly related by:
where θ_ki and φ_ij are assumed to be known. The necessity of knowing θ_ki explains the division of the algorithm into two parts. Besides computing the orientations, the first part of the algorithm is responsible for computing the number of robots connected to the computational network and, consequently, for defining the variables to be computed. Some edges are still wasted in this part of the algorithm, which is reasonable under the observation that bearing measurements tend to be much better than the range ones. Another improvement can be carried out using a dynamic Kalman filter to estimate the object's position. Because the object is rigid, if a model that relates its corners is used, tracking can be performed even when some corners cannot be seen by the robots. A simple discrete model, in which the dimensions of the object are not necessarily known, is: where v_xj and v_yj are the velocity components of O_j, d_j and θ_j are the size and orientation of the edge between O_j and O_{j+1}, T is the sample time, and ω is the object's angular velocity. This model considers that the object's velocity components are constant. Since this is not always true, during the filter design low values must be assigned to the variables that represent the confidence level of the three last lines of the model. Because a nonlinear model is used, an extended Kalman filter (EKF) is necessary. The measurements used in this filter will be the x and y of each object corner relative to the origin robot. Once the robots' poses are estimated, the transformations in Eqs. (18) and (19) are used to compute the vector of measurements and its covariance. The EKF introduces another step in the algorithm: the vertices of the graph relative to the object corners must be removed from the localization steps, because the measurements' combinations are now performed by the filter.
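The filtering step can be illustrated with the linear core of such a tracker. The sketch below is our own simplification, not the paper's filter: it propagates a single corner under the constant-velocity assumption, the rigid-body coupling through d_j, θ_j, and ω (which is what makes the actual filter an extended one) is omitted, and the process noise `q` plays the role of the low confidence assigned to the constant-velocity rows:

```python
import numpy as np

def kf_step(x, P, z, R, T=0.1, q=1e-2):
    """One predict/update cycle of a constant-velocity tracker for a
    single corner, state x = [px, py, vx, vy], measurement z = [px, py]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = T                 # px += vx*T, py += vy*T
    Q = q * np.eye(4)                     # low trust in constant velocity
    x, P = F @ x, F @ P @ F.T + Q         # predict
    H = np.eye(4)[:2]                     # only the position is measured
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)               # update with the corner measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Each iteration consumes one corner position (with its covariance `R`) computed from the robots' pose estimates, and the velocity states let the corner be predicted forward when no robot currently sees it.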
4.2 Centralized × Distributed
While the previous centralized approach works very well for a relatively small group of robots, network issues, such as traffic and delays, and the lack of computational resources in a single robot can pose significant problems when we are considering groups with tens or hundreds of robots. In these cases the use of distributed algorithms becomes mandatory. We then want a way to use the same algorithm and reduce both the computation and bandwidth needed by decentralizing part of the processing. In general, mobile robots only need local information to perform a task. Thus, if each robot collects information from its immediate neighbors and combines this data locally, it has the information it needs most of the time. In the case where a robot can listen to every robot in the group (i.e., all robots are within its communication range), it could receive all the data and, for example, localize only the robots located within a certain distance (measured by depth in the graph) from it. Alternatively, the robot could also be more selective and localize only the robots that can see a given object or location.