Dell M1000e

The Dell blade server products are built around the M1000e enclosure, which holds their server blades, an embedded EqualLogic iSCSI storage area network, and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.

M1000e enclosure with a selection of G12 server blades

Enclosure

The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg, while a fully loaded system can weigh up to 178.8 kg.[1]

The servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back together with the management modules (the CMC, or chassis management controller) and the KVM switch. The blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to get access to the main console of each installed server.

In June 2013 Dell introduced the PowerEdge VRTX, a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy (e.g. M520, M620 — the only blades supported), are not interchangeable between the VRTX and the M1000e. The blades differ in firmware and mezzanine connectors.[citation needed]

In 2018 Dell introduced the Dell PE MX7000, a new MX enclosure model and the next generation of Dell enclosures.

The M1000e enclosure has a front side and a back side, and thus all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors on both sides: the front side is dedicated to the server blades, the back to the I/O modules.

Midplane

Indication at the back of the chassis showing which midplane was factory-installed

The midplane is completely passive. The server blades are inserted at the front side of the enclosure, while all other components can be reached via the back.[2]
The original midplane 1.0 capabilities are: Fabric A - Ethernet 1Gb; Fabrics B&C - Ethernet 1Gb, 10Gb, 40Gb; Fibre Channel 4Gb, 8Gb; InfiniBand DDR, QDR, FDR10. The enhanced midplane 1.1 capabilities are: Fabric A - Ethernet 1Gb, 10Gb; Fabrics B&C - Ethernet 1Gb, 10Gb, 40Gb; Fibre Channel 4Gb, 8Gb, 16Gb; InfiniBand DDR, QDR, FDR10, FDR. The original M1000e enclosures came with midplane version 1.0, but that midplane did not support the 10GBASE-KR standard on fabric A (10GBASE-KR is supported on fabrics B&C). To have 10Gb Ethernet on fabric A, or 16Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B&C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1, and the midplane can be upgraded. The factory-installed version can be seen via the markings on the back of the enclosure, just above the I/O modules: if an "arrow down" is visible above the 6 I/O slots, midplane 1.0 was installed in the factory; if there are 3 or 4 horizontal bars, midplane 1.1 was installed. As the midplane can be upgraded, these outside markings are not decisive: the actual installed midplane version is visible via the CMC management interface.[3]
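The fabric capability lists above lend themselves to a small lookup table. The sketch below is an illustrative helper (the names and structure are our own, not any Dell API) that answers whether a given midplane revision supports a given standard on a fabric, using only the figures from this section.

```python
# Illustrative lookup of M1000e midplane capabilities per fabric,
# taken from the capability lists in this section.
MIDPLANE_CAPS = {
    "1.0": {
        "A": {"1GbE"},
        "B": {"1GbE", "10GbE", "40GbE", "4GFC", "8GFC",
              "IB-DDR", "IB-QDR", "IB-FDR10"},
        "C": {"1GbE", "10GbE", "40GbE", "4GFC", "8GFC",
              "IB-DDR", "IB-QDR", "IB-FDR10"},
    },
    "1.1": {
        "A": {"1GbE", "10GbE"},
        "B": {"1GbE", "10GbE", "40GbE", "4GFC", "8GFC", "16GFC",
              "IB-DDR", "IB-QDR", "IB-FDR10", "IB-FDR"},
        "C": {"1GbE", "10GbE", "40GbE", "4GFC", "8GFC", "16GFC",
              "IB-DDR", "IB-QDR", "IB-FDR10", "IB-FDR"},
    },
}

def supports(midplane: str, fabric: str, standard: str) -> bool:
    """True if the given midplane revision supports the standard on that fabric."""
    return standard in MIDPLANE_CAPS[midplane][fabric]

# 10Gb Ethernet on fabric A needs the 1.1 midplane; fabrics B and C had it already.
print(supports("1.0", "A", "10GbE"))  # False
print(supports("1.1", "A", "10GbE"))  # True
```

The same table makes the 16Gb Fibre Channel and InfiniBand FDR constraints on fabrics B and C directly checkable.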

Front: blade servers

Each M1000e enclosure can hold 32 quarter-height blades, 16 half-height blades, 8 full-height blades, or a combination (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16, where 1-8 are the top blades and 9-16 sit directly beneath 1-8. When using full-height blades, one uses slot n (where n = 1 to 8) together with slot n+8. Integrated at the bottom of the front side is a connection option for 2 × USB, meant for a mouse and keyboard, as well as a standard VGA monitor connector (15-pin). Next to this is a power button with a power indicator.
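The slot-pairing rule for full-height blades (top position n also occupies slot n + 8) can be expressed as a one-line helper. This is an illustrative sketch, not Dell tooling.

```python
def full_height_slots(n: int) -> tuple:
    """Slots occupied by a full-height blade inserted in top position n (1-8):
    the top slot n and the slot n + 8 directly beneath it."""
    if not 1 <= n <= 8:
        raise ValueError("full-height blades use top positions 1-8")
    return (n, n + 8)

print(full_height_slots(5))  # (5, 13)
```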

Next to this is a small LCD screen with navigation buttons which allows one to get system information without accessing the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it out towards oneself and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits next to the LCD display and is always visible: a blue LED indicates normal operation, while an orange LED indicates a problem of some kind.

This LCD display can also be used for the initial configuration wizard on a newly delivered (unconfigured) system, allowing the operator to set the CMC IP address.[2]

Back: power, management and I/O

All other parts and modules are placed at the rear of the M1000e. The rear side is divided into 3 sections: the top, where one inserts up to three management modules: one or two CMC modules and an optional iKVM module; the bottom, which offers 6 bays for power supply units (a standard M1000e operates with three PSUs); and the area in between, which offers 3 × 3 bays for cooling fans (left - middle - right) and up to 6 I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1, while the right side has the bays for A2, B2 and C2. The fabric A I/O modules connect to the on-board I/O controllers, which in most cases are a dual 1Gb or 10Gb Ethernet NIC. When the blade has a dual-port 1Gb on-board NIC, the first NIC connects to the I/O module in fabric A1 and the second NIC connects to fabric A2 (and the blade slot corresponds to the internal Ethernet interface: e.g. the first on-board NIC in slot 5 connects to interface 5 of fabric A1 and the second on-board NIC goes to interface 5 of fabric A2).

The I/O modules in fabric B1/B2 connect to the (optional) Mezzanine card B (or 2) in the server, and fabric C connects to Mezzanine C (or 3).
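The fixed wiring between a half-height blade and the I/O bays can be sketched as a small mapping: on-board NICs go to fabric A, mezzanine B to fabric B, mezzanine C to fabric C, and the internal switch port number equals the blade slot. The helper below is purely illustrative (the adapter names are our own shorthand).

```python
# Illustrative sketch of the blade-to-I/O-bay wiring described above.
def io_connection(slot: int, adapter: str, port: int) -> tuple:
    """adapter: 'LOM' (on-board NIC), 'MEZZ_B' or 'MEZZ_C'; port: 1 or 2.
    Returns (I/O bay, internal switch port number)."""
    fabric = {"LOM": "A", "MEZZ_B": "B", "MEZZ_C": "C"}[adapter]
    side = {1: "1", 2: "2"}[port]      # first port -> left bay, second -> right bay
    return (f"{fabric}{side}", slot)   # internal port number matches the blade slot

print(io_connection(5, "LOM", 1))     # ('A1', 5)
print(io_connection(5, "LOM", 2))     # ('A2', 5)
print(io_connection(12, "MEZZ_C", 1)) # ('C1', 12)
```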

All modules can be inserted or removed while the enclosure is running (hot-swapping).[2]

Available server blades

An M1000e holds up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades, or a mix of these (e.g. 2 full-height + 12 half-height). Installing quarter-height blades requires a full-size sleeve. The current list covers the currently available 11G blades and the latest Generation 12 models. Older blades like the M605, M805 and M905 series are also available.

PowerEdge M420

Launched in 2012,[4] the PE M420 is a "quarter-size" blade: where most servers are "half-size", allowing 16 blades per M1000e enclosure, with the new M420 up to 32 blade servers can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in each blade, but as it is now possible to run 32 blades per chassis, one might need to change this iDRAC IP assignment. To support the M420 server one needs to run CMC firmware 4.1 or later,[5] and a full-size "sleeve" that holds up to four M420 blades is required. It also has consequences for the "normal" I/O NIC assignment: most (half-size) blades have two LOMs (LAN On Motherboard): one connecting to the switch in fabric A1, the other to fabric A2, and the same applies to the Mezzanine B and C cards. All available I/O modules (except for the PCM6348, MXL and MIOA) have 16 internal ports: one for each half-size blade. As an M420 has two 10Gb LOM NICs, a fully loaded chassis would require 2 × 32 internal switch ports for the LOMs and the same for the Mezzanine. An M420 server only supports a single Mezzanine card (Mezzanine B or Mezzanine C depending on its location), whereas half-height and full-height systems support two Mezzanine cards. To support all on-board NICs one would need to deploy a 32-port Ethernet switch such as the MXL or the Force10 I/O Aggregator.
But for the Mezzanine card it is different: the connections from the Mezzanine cards of PE M420 blades are "load-balanced" between the B and C fabrics of the M1000e: the Mezzanine card in "slot A" (the top slot in the sleeve) connects to fabric C, while "slot B" (the second slot from the top) connects to fabric B, and that is then repeated for the slots C and D in the sleeve.[4]
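The M420 mezzanine "load balancing" described above is a fixed four-entry mapping from sleeve slot to fabric. A minimal sketch, assuming the slot letters A-D as used in the text:

```python
# Illustrative sketch of the M420 mezzanine-to-fabric mapping: the blade in
# sleeve slot A (top) lands on fabric C, slot B on fabric B, and the
# pattern repeats for sleeve slots C and D.
def m420_mezzanine_fabric(sleeve_slot: str) -> str:
    mapping = {"A": "C", "B": "B", "C": "C", "D": "B"}
    return mapping[sleeve_slot.upper()]

for s in "ABCD":
    print(s, "->", m420_mezzanine_fabric(s))
# A -> C, B -> B, C -> C, D -> B
```

Note how the single mezzanine per M420 alternates fabrics down the sleeve, spreading the load evenly over the B and C I/O modules.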

PowerEdge M520

A half-height server with up to 2 × 8-core Intel Xeon E5-2400 CPUs, running the Intel C600 chipset and offering up to 384 GB of RAM via 12 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage, with a choice of Intel or Broadcom LOMs and 2 Mezzanine slots.[6] The M520 can also be used in the PowerEdge VRTX system.

PowerEdge M600

A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB of RAM

PowerEdge M610

A half-height server with a quad-core or six-core Intel 5500 or 5600 Xeon CPU and the Intel 5520 chipset. RAM options via 12 DIMM slots for up to 192 GB DDR3 RAM. A maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs, and a choice of built-in NICs for Ethernet, a converged network adapter (CNA), Fibre Channel or InfiniBand. The server has the Intel 5520 chipset and a Matrox G200 video card.[7]

PowerEdge M610x

A full-height blade server that has the same capabilities as the half-height M610, but offers an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards.[8]

PowerEdge M620

A half-height server with up to 2 × 12-core Intel Xeon E5-2600 v2 CPUs, running the Intel C600 chipset and offering up to 768 GB of RAM via 24 DIMM slots. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage with a range of RAID controller options. There are two external and one internal USB port and two SD card slots. The blades can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL, and can also be ordered with Citrix XenServer or VMWare vSphere ESXi, or using Hyper-V, which comes with W2K8 R2.[9] According to the vendor, all Generation 12 servers are optimized to run as virtualization platforms.[10] Out-of-band management is done via iDRAC 7 through the CMC.

PowerEdge M630

A half-height server with up to 2 × 22-core Intel Xeon E5-2600 v3/v4 CPUs, running the Intel C610 chipset and offering up to 768 GB of RAM via 24 DIMM slots, or 640 GB of RAM via 20 DIMM slots when using 145W CPUs. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) can be installed for local storage, with a choice of Intel or Broadcom LOMs and 2 Mezzanine slots.[6] The M630 can also be used in the PowerEdge VRTX system. Amulet HotKey offers a modified M630 server that can be fitted with a GPU or a Teradici PCoIP Mezzanine module.

PowerEdge M640

A half-height server with up to 2 × 28-core Xeon Scalable CPUs, supported in both the M1000e and the PowerEdge VRTX chassis. The server can support up to 16 DDR4 RDIMM memory slots for up to 1024 GB of RAM and 2 drive bays supporting SAS/SATA or NVMe drives (with an adapter). The server uses iDRAC 9.

PowerEdge M710

A full-height server with a quad-core or six-core Intel 5500 or 5600 Xeon CPU and up to 192 GB of RAM. A maximum of four on-blade hot-pluggable 2.5-inch hard disks or SSDs, and a choice of built-in NICs for Ethernet or a converged network adapter, Fibre Channel or InfiniBand. The video card is a Matrox G200. The server has the Intel 5520 chipset.[11]

PowerEdge M710HD

A two-socket version of the M710, but now in a half-height blade. The CPU can be a quad-core or 6-core Xeon 5500 or 5600 with the Intel 5520 chipset. Via 18 DIMM slots, up to 288 GB of DDR3 RAM can be put on this blade, with the standard choice of built-in Ethernet NICs based on Broadcom or Intel, and one or two Mezzanine cards for Ethernet, Fibre Channel or InfiniBand.[12]

PowerEdge M820

A full-height server with 4 × 8-core Intel Xeon E5-4600 CPUs, running the Intel C600 chipset and offering up to 1.5 TB of RAM via 48 DIMM slots. Up to four on-blade 2.5" SAS HDDs/SSDs or two PCIe flash SSDs can be installed for local storage. The M820 offers a choice of 3 different on-board converged Ethernet adapters for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic, and up to two additional Mezzanines for Ethernet, Fibre Channel or InfiniBand I/O.[13]

PowerEdge M910

A full-height 11th-generation server with up to 4 × 10-core Intel XEON E7 CPUs, or 4 × 8-core XEON 7500-series, or 2 × 8-core XEON 6500-series CPUs, 512 GB or 1 TB of DDR3 RAM and two hot-swappable 2.5" hard disks (spinning or SSD). It uses the Intel E 7510 chipset. A choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand.[14]

PowerEdge M915

Also a full-height 11G server, using the AMD Opteron 6100 or 6200 series CPU with the AMD SR5670 and SP5100 chipsets. Memory via 32 DDR3 DIMM slots offering up to 512 GB of RAM. On-board up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two Mezzanine cards for dual-port 10Gb Ethernet, dual-port FCoE, dual-port 8Gb Fibre Channel or dual-port Mellanox Infiniband. Video is via the on-board Matrox G200eW with 8 MB of memory.[15]

Mezzanine cards

Each server comes with Ethernet NICs on the motherboard. These "on-board" NICs connect to a switch or pass-through module inserted in the A1 or A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O,[16] each blade has two so-called mezzanine slots: slot B, connecting to the switches/modules in bays B1 and B2, and slot C, connecting to C1/C2: an M1000e chassis holds up to 6 switches or pass-through modules. For redundancy one would normally install switches in pairs: the switch in bay A2 is normally the same as the A1 switch and connects the blades' on-motherboard NICs to the data or storage network.

(Converged) Ethernet Mezzanine cards

Standard blade servers have one or more built-in NICs that connect to the "default" switch slot (Fabric A) in the enclosure (often blade servers also offer one or more external NIC interfaces on the front of the blade), but if one wants the server to have more physical (internal) interfaces, or to connect to different switch blades in the enclosure, one can place extra mezzanine cards on the blade. The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for its PowerEdge blades:[17]

  • Broadcom 57712 dual-port CNA
  • Brocade BR1741M-k CNA
  • Mellanox ConnectX-2 dual-port 10Gb card
  • Intel dual-port 10Gb Ethernet
  • Intel quad-port Gigabit Ethernet
  • Intel quad-port Gigabit Ethernet with virtualization technology and iSCSI acceleration features
  • Broadcom NetXtreme II 5709 dual- and quad-port Gigabit Ethernet (dual-port with iSCSI offload features)
  • Broadcom NetXtreme II 5711 dual-port 10Gb Ethernet with iSCSI offload features

Non-Ethernet cards

Apart from the above, the following mezzanine cards are available:[17]

  • Emulex LightPulse LPe1105-M4 host bus adapter
  • Mellanox ConnectX IB MDI dual-port InfiniBand mezzanine card
  • QLogic SANblade HBA
  • SANsurfer Pro

Blade storage

In most setups the server blades use external storage (NAS using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs on the blades (or even just a boot-only SD card with VMware ESX[18]). It is also possible to use completely diskless blades that boot via PXE or external storage, but regardless of the local and boot storage: the majority of the data used by blades is stored on a SAN or NAS external to the blade enclosure.

EqualLogic Blade-SAN

Dell has put the EqualLogic PS M4110 models of iSCSI storage arrays[19] physically installed in the M1000e chassis: this SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, drawing power from the enclosure system, etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate via Ethernet, and the system does require an accepted Ethernet blade switch at the back (or a pass-through module plus rack switch): there is no option for direct communication between the server blades in the chassis and the M4110. It allows a user to pack a complete mini-datacentre in a single enclosure (19" rack, 10 RU).

Depending on the model and the disk drives used, the PS M4110 offers a system (raw) storage capacity between 4.5 TB (M4110XV with 14 × 146 GB, 15K SAS HDD) and 14 TB (M4110E with 14 × 1 TB, 7.2K SAS HDD). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs.[20]

Each M4110 comes with one or two controllers and two 10-gigabit Ethernet interfaces for iSCSI. Management of the SAN goes via the chassis-management interface (CMC). Because the iSCSI uses 10Gb interfaces, the SAN should be used in combination with one of the 10G blade switches: the PCM 8024-k or the Force10 MXL switch.[20] The enclosure's midplane hardware version needs to be at least version 1.1 to support 10Gb KR connectivity.[21][22]

PowerConnect switches

Drawing of an M1000e enclosure with 2 × FTOS MXL, 2 × M8024-k and 2 × FibreChannel 5424 modules

At the back side of the enclosure one finds the power supplies, fan trays, one or two chassis-management modules (the CMCs) and a virtual KVM switch. The rear side also offers 6 bays for I/O modules, numbered in three pairs: A1/A2, B1/B2 and C1/C2. The A bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure).

The Dell PowerConnect switches are modular switches for use in the Dell M1000e blade server enclosure. The M6220, M6348, M8024 and M8024-k are all switches in the same family, based on the same fabrics (Broadcom) and running the same firmware version.[23]

All the M-series switches are OSI layer 3 capable: one can therefore also describe these devices as layer 2 Ethernet switches with built-in router or layer 3 functionality.

The most important difference between the M-series switches and the Dell PowerConnect classic switches (e.g. the 8024 model) is the fact that most interfaces are internal interfaces that connect to the blade servers via the midplane of the enclosure. Also, the M-series cannot run outside the enclosure: it only works when inserted in the enclosure.

PowerConnect M6220

This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces, with the option to extend it with up to four external 10Gb interfaces for uplinks, or two 10Gb uplinks and two stacking ports to stack several PCM6220s into one large logical switch.

PowerConnect M6348

This is a 48-port switch: 32 internal 1Gb interfaces (two per server blade) and 16 external copper (RJ45) Gigabit interfaces. There are also two SFP+ slots for two 10Gb uplinks and two CX4 slots that can either be used for two extra 10Gb uplinks or to stack several M6348 blades into one logical switch. The M6348 offers four 1Gb interfaces to each blade, which means one can only utilize the switch at full capacity when using blades that offer 4 internal NICs on fabric A (= the internal/on-motherboard NIC). The M6348 can be stacked with other M6348s but also with the PCT7000 series rack switches.

PowerConnect M8024 and M8024-k

The M8024 and M8024-k offer 16 internal autosensing 1 or 10Gb interfaces and up to 8 external ports via one or two I/O modules, each of which can offer: 4 × 10Gb SFP+ slots, 3 × CX4 10Gb (only) copper, or 2 × 10GBaseT 1/10Gb RJ-45 interfaces. The PCM8024 has been "end of sales" since November 2011 and was replaced by the PCM8024-k.[24] Since the firmware 4.2 update, the PCM8024-k partially supports FCoE via FIP (FCoE Initialization Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces.

Also since firmware 4.2, the PCM8024-k can be stacked using its external 10Gb Ethernet interfaces by assigning them as stacking ports. Although this new stacking option was introduced in the same firmware release for the PCT8024 and PCT8024-f, the blade (PCM) and rack (PCT) versions cannot be combined in a single stack. The new features are not available on the "original" PCM8024: firmware 4.2.x for the PCM8024 only fixed bugs; no new features or new functionality are added to "end of sales" models.[25][26]

To use the PCM8024-k switches, a midplane that supports the KR or IEEE 802.3ap standards is required.[21][22]

PowerConnect capabilities

All PowerConnect M-series ("PCM") switches are multi-layer switches, thus offering both layer 2 (Ethernet) options as well as layer 3 or IP routing options.
Depending on the model, the switches offer internal 1Gbit/s or 10Gbit/s interfaces towards the blades in the chassis. The PowerConnect M-series models with "-k" in the model name offer 10Gb internal connections using the 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking interfaces, but they can also be used to connect non-blade servers to the network.
At the link level the PCM switches support link aggregation: both static LAGs and LACP. Like all PowerConnect switches, the switches run RSTP as the Spanning Tree Protocol, but it is also possible to run MSTP, or Multiple Spanning Tree. The internal ports towards the blades are by default set as edge or "portfast" ports. Another feature is link dependency: one can, for example, configure the switch so that all internal ports towards the blades are shut down when the switch becomes isolated because it loses its uplink to the rest of the network.
All PCM switches can be configured as pure layer 2 switches, or they can be configured to do all routing: both routing between the configured VLANs and external routing. Besides static routes the switches also support OSPF and RIP routing. When using the switch as a routing switch, one needs to configure VLAN interfaces and assign an IP address to each VLAN interface: it is not possible to assign an IP address directly to a physical interface.[23]

Stacking

All PowerConnect blade switches, except the original PC-M8024, can be stacked. To stack, the newer PC-M8024-k switch needs to run firmware version 4.2 or higher.[27] In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-ks. The only exception is the capability to stack the PCM6348 blade switch together with the PCT7024 or PCT7048 rack switches. Stacks can contain multiple switches within one single M1000e chassis, but one can also stack switches from different chassis to form one logical switch.[28]
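The stacking rules above reduce to a small compatibility predicate. The sketch below is illustrative only (model strings are our own shorthand, and "same family" is simplified to "same model"; the PC-M8024-k firmware 4.2 requirement is noted but not modelled):

```python
# Illustrative sketch of the PowerConnect stacking rules described above:
# - the original PC-M8024 cannot stack at all
# - the PCM6348 may also stack with the PCT7024/PCT7048 rack switches
# - otherwise, only switches of the same family stack together
#   (PC-M8024-k additionally needs firmware 4.2+, not modelled here)
def can_stack(a: str, b: str) -> bool:
    if "PC-M8024" in (a, b):                   # original M8024: no stacking
        return False
    mixed = {"PCM6348", "PCT7024", "PCT7048"}  # blade/rack mixed-stack exception
    if a in mixed and b in mixed:
        return True
    return a == b                               # simplified "same family" rule

print(can_stack("PCM6220", "PCM6220"))    # True
print(can_stack("PCM6348", "PCT7048"))    # True
print(can_stack("PC-M8024", "PC-M8024"))  # False
```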

Force10 switches

MXL 10/40 Gb switch

At Interop 2012 in Las Vegas, Dell announced its first FTOS-based blade switch: the Force10 MXL 10/40Gbps blade switch, followed later by a 10/40Gbit/s concentrator. The FTOS MXL 40Gb was introduced on 19 July 2012.[29] The MXL provides 32 internal 10Gbit/s links (2 ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots allowing a maximum of 4 additional QSFP+ 40Gbit/s ports or 8 × 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stack) uplink or, with a breakout cable, 4 × 10Gbit/s links. Dell offers direct-attach cables with a QSFP+ interface on one side and 4 × SFP+ on the other end, or with a QSFP+ transceiver on one end and 4 fibre-optic pairs to be connected to SFP+ transceivers on the other side. Multiple MXL blade switches can be stacked into one logical switch.
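The external port arithmetic above (two fixed QSFP+ ports, two expansion slots, 4 × 10Gb breakout per QSFP+ port) can be checked with a back-of-the-envelope sketch. The figures follow the text; the helper itself is purely illustrative.

```python
# Back-of-the-envelope sketch of MXL external port capacity, using the
# figures in this section: 2 fixed QSFP+ ports plus 2 expansion slots,
# each expansion slot adding 2 more QSFP+ 40Gb ports, and each QSFP+
# port breaking out into 4 x 10Gb links with a breakout cable.
FIXED_QSFP = 2
EXPANSION_SLOTS = 2
QSFP_PER_MODULE = 2

def max_external_10g_links(breakout: bool) -> int:
    qsfp = FIXED_QSFP + EXPANSION_SLOTS * QSFP_PER_MODULE  # 6 QSFP+ ports max
    return qsfp * 4 if breakout else qsfp

print(max_external_10g_links(False))  # 6  (ports running at native 40Gb)
print(max_external_10g_links(True))   # 24 (10Gb links after full breakout)
```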

Besides the above 2 × 40Gb QSFP module, the MXL also supports 4 × 10Gb SFP+ and 4 × 10GbaseT modules. All Ethernet extension modules for the MXL can also be used for the rack-based N4000 series (fka PowerConnect 8100).

The MXL switches also support Fibre Channel over Ethernet, so that server blades with a converged network adapter mezzanine card can be used for both data and storage using a Fibre Channel storage system. The MXL 10/40Gbit/s blade switch runs FTOS[30] and is therefore the first M1000e I/O product without a web graphical user interface. The MXL can either forward the FCoE traffic to an upstream switch or, using a 4-port 8Gb FC module, perform the FCF function itself, connecting the MXL to a full FC switch or directly to a FC SAN.

I/O Aggregator

In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis, running FTOS. The I/O Aggregator offers 32 internal 10Gb ports towards the blades, two standard 40Gbit/s QSFP+ uplinks and two extension slots. Depending on one's requirements one can get extension modules for 40Gb QSFP+ ports, 10Gb SFP+ or 1-10GBaseT copper interfaces. One can assign up to 16 × 10Gb uplinks to the distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data Center Bridging) features.[31]

Cisco switches

Dell also offered some Cisco Catalyst switches for this blade enclosure. Cisco offers a range of switches for blade systems from the main vendors: besides the Dell M1000e enclosure, Cisco offers similar switches for HP, FSC and IBM blade enclosures.[32]

For the Dell M1000e there are two model ranges for Ethernet switching. (Note: Cisco also offers the Catalyst 3030, but this switch is for the old Generation 8 or Gen 9 blade system, not for the current M1000e enclosure.[33])

As of 2017, the only available Cisco I/O device for the M1000e chassis is the Nexus FEX.[34]

Catalyst 3032

The Catalyst 3032: a layer 2 switch with 16 internal and 4 external 1Gb Ethernet interfaces, with an option to extend to 8 external 1Gb interfaces. The built-in external ports are 10/100/1000BaseT copper interfaces with an RJ45 connector, and up to 4 extra 1Gb ports can be added using the extension module slots, each offering 2 SFP slots for fibre-optic or Twinax 1Gb links. The Catalyst 3032 does not offer stacking (virtual blade switching).[35]

Catalyst 3130

The 3130 series switches offer 16 internal 1Gb interfaces towards the blade servers. For the uplink or external connections there are two options: the 3130G offers 4 built-in 10/100/1000BaseT RJ-45 slots and two module bays allowing up to 4 SFP 1Gb slots using SFP transceivers or SFP Twinax cables.[36]

The 3130X also offers the 4 external 10/100/1000BaseT connections and two modules for X2 10Gb uplinks.[37]

Both 3130 switches offer "stacking" or "virtual blade switching": up to 8 Catalyst 3130 switches can be stacked to behave like one single switch. This can simplify the management of the switches and simplify the (spanning tree) topology, as the combined switches are just one switch for spanning tree considerations. It also allows the network manager to aggregate uplinks from physically different switch units into one logical link.[35] The 3130 switches come standard with IP Base IOS offering all layer 2 and the basic layer 3 or routing capabilities. Users can upgrade this basic license to IP Services or IP Advanced Services, adding additional routing capabilities such as the EIGRP, OSPF or BGP4 routing protocols, IPv6 routing, and hardware-based unicast and multicast routing. These advanced features are built into the IOS on the switch, but a user has to upgrade to the IP (Advanced) Services license to unlock these options.[38]

Nexus Fabric Extender

Since January 2013, Cisco and Dell offer a Nexus Fabric Extender for the M1000e chassis: the Nexus B22Dell. Such FEXs were already available for HP and Fujitsu blade systems, and now there is also a FEX for the M1000e blade system. The release of the B22Dell came approx. 2.5 years after the initially planned and announced date: a disagreement between Dell and Cisco resulted in Cisco stopping the development of the FEX for the M1000e in 2010.[39] Customers manage a FEX from a core Nexus 5500-series switch.[40]

Other I/O cards

An M1000e enclosure can hold up to 6 switches or other I/O cards. Besides the Ethernet switches mentioned above — the PowerConnect M-series, the Force10 MXL and the Cisco Catalyst 3100 switches — the following I/O modules are available for or usable in a Dell M1000e enclosure:[1][41]

  • Ethernet pass-through modules bring the internal server interfaces to an external interface at the back of the enclosure. There are pass-through modules for 1G, 10G-XAUI[42] and 10G 10GbaseXR.[43] All pass-through modules offer 16 internal interfaces linked to the 16 external ports on the module.
  • Emulex 4 or 8 Gb Fibre Channel pass-through module[1]
  • Brocade 5424 8Gb FC switch for a Fibre Channel based storage area network
  • Brocade M6505 16Gb FC switch[44]
  • Dell 4 or 8Gb Fibre Channel NPIV Port Aggregator
  • Mellanox 2401G and 4001F/Q - InfiniBand Dual Data Rate or Quad Data Rate modules for high-performance computing
  • Infiniscale 4: 16-port 40Gb InfiniBand switch[45]
  • Cisco M7000e InfiniBand switch with 8 external DDR ports
  • PowerConnect 8428-k switch with 4 "native" 8Gb Fibre Channel interfaces:

PCM 8428-k Brocade FCoE

Although the PCM8024-k and the MXL switch support Fibre Channel over Ethernet, they are not "native" FCoE switches: they have no Fibre Channel interfaces. These switches would need to be connected to a "native" FCoE switch such as the PowerConnect B-series 8000e (the same as a Brocade 8000 switch) or a Cisco Nexus switch. The PCM8428 is the only full Fibre Channel over Ethernet capable switch for the M1000e enclosure, offering 16 enhanced Ethernet 10Gb internal interfaces, 8 × 10Gb (enhanced) Ethernet external ports, and also up to four 8Gb Fibre Channel interfaces to connect directly to a FC SAN controller or a central Fibre Channel switch.
Kalit mato va tolali kanalli kalit uchun Brocade FC dasturiy ta'minotini va Ethernet tugmachasini sozlash uchun Dökümhane OS-ni ishlaydi.[46]Imkoniyatlari bo'yicha u Powerconnect-B8000 bilan juda taqqoslanadi, faqat formfaktor va Ethernet va FC interfeyslari soni har xil.[1][47]

PowerConnect M5424 / Brocade 5424

This is a full Fibre Channel switch from Brocade. The B or C fabrics are used to connect the Fibre Channel mezzanine card in the blades to the FC-based storage infrastructure. The M5424 offers 16 internal ports that connect to the FC mezzanine cards in the blade servers and 8 external ports. From the factory only the first two external ports (17 and 18) are licensed: additional connections require extra Dynamic Ports On Demand (DPOD) licenses. The switch runs on a PowerPC 440EPX processor at 667 MHz with 512 MB of DDR2 RAM system memory. It also has 4 MB of boot flash and 512 MB of compact flash memory on board.[48]
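The Dynamic Ports On Demand scheme described above can be sketched with a short illustrative model. This is not Brocade's actual licensing logic; the function and data layout below are assumptions made purely to show how factory-licensed and DPOD-licensed external ports combine.

```python
# Illustrative sketch of DPOD licensing on the M5424 as described in the
# text: 16 internal ports plus 8 external ports (numbered 17-24), where
# only external ports 17 and 18 are licensed from the factory and the
# remaining external ports need extra DPOD licenses. This is a simplified
# model, not Brocade's actual licensing implementation.

INTERNAL_PORTS = set(range(1, 17))            # internal ports 1..16, always usable
EXTERNAL_PORTS = set(range(17, 25))           # external ports 17..24
FACTORY_LICENSED_EXTERNAL = {17, 18}          # licensed out of the box

def usable_ports(dpod_licensed):
    """Return the set of ports that may carry traffic, given the set of
    external port numbers covered by additional DPOD licenses."""
    external = FACTORY_LICENSED_EXTERNAL | (dpod_licensed & EXTERNAL_PORTS)
    return INTERNAL_PORTS | external

# Factory default: 16 internal ports plus external ports 17 and 18.
print(len(usable_ports(set())))        # 18
# After licensing external ports 19 and 20 via DPOD:
print(len(usable_ports({19, 20})))     # 20
```

Port numbers outside the external range are simply ignored by the model, mirroring the idea that DPOD licenses only apply to the switch's own external ports.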

Brocade M6505

Same capabilities as above, but offering 16 × 16Gb FC ports towards the server mezzanines and 8 external ports. The standard license covers 12 connections, which can be increased in increments of 12 to support all 24 ports. The ports auto-sense speeds of 2, 4, 8 and 16 Gb. Total aggregate bandwidth is 384 GB.[49]

Brocade 4424

Like the 5424, the 4424 is also a Brocade SAN I/O module offering 16 internal and 8 external ports. The switch supports speeds up to 4 Gbit/s. As delivered, 12 of the ports are licensed for operation; with additional licenses all 24 ports can be enabled. The 4424 runs on a PowerPC 440GP processor at 333 MHz with 256 MB SDRAM system memory, 4 MB boot flash and 256 MB compact flash memory.[50]

InfiniBand

Several modules are available for InfiniBand connectivity in the M1000e chassis. InfiniBand offers the high-bandwidth, low-latency intra-computer connectivity required in academic HPC clusters, large enterprise datacenters and cloud applications.[51] The SFS M7000e InfiniBand switch is available from Cisco. The Cisco SFS offers 16 internal 'autosensing' interfaces at single (10 Gbit/s, SDR) or double (20 Gbit/s, DDR) data rate, and 8 DDR external/uplink ports. The total switching capacity is 960 Gbit/s.[52]

Other options are the Mellanox SwitchX M4001F and M4001Q[53] and the Mellanox M2401G 20Gb InfiniBand switch for the M1000e enclosure.[54]

The M4001 switches offer either 40 Gbit/s (M4001Q) or 56 Gbit/s (M4001F) connectivity and have 16 external QSFP ports and 16 internal connections to the InfiniBand mezzanine cards on the blades. As with all other non-Ethernet-based switches, they can only be installed in the B or C fabric of the M1000e enclosure, as the A fabric connects to the "on motherboard" NICs of the blades, which only come as Ethernet NICs or converged Ethernet NICs.

The 2401G offers 24 ports: 16 internal and 8 external. Unlike the M4001 switches, whose external ports use QSFP connectors for fiber transceivers, the 2401 has CX4 copper cable interfaces. The switching capacity of the M2401 is 960 Gbit/s.[54]

The 4001, with 16 internal and 16 external ports at either 40 or 56 Gbit/s, offers a switching capacity of 2.56 Tbit/s.

Passthrough modules

In some setups switching capabilities in the enclosure are not wanted or needed. For example, if only a few of the blade servers use Fibre Channel storage, a fully manageable FC switch is not required: the 'internal' FC interface of a blade can simply be connected directly to the (existing) FC infrastructure. A pass-through module has only very limited management capabilities. Another reason to choose pass-through instead of 'enclosure switches' could be the wish to have all switching done on a 'one vendor' infrastructure; if that vendor's switch is not available as an M1000e module (i.e., not one of the switches from Dell Powerconnect, Dell Force10 or Cisco), one could opt for pass-through modules:

  • 32-port 10/100/1000 Mbit/s gigabit Ethernet pass-through card: connects the 16 internal Ethernet interfaces (1 per blade) to external RJ45 10/100/1000 Mbit/s copper ports[55]
  • 32-port 10 Gb NIC version: supports 16 internal 10Gb ports with 16 external SFP+ slots
  • 32-port 10 Gb CNA version: supports 16 internal 10Gb CNA ports with 16 external CNAs[56]
  • Dell 4 or 8Gb Fibre Channel NPIV port aggregator
  • Intel/Qlogic offer a QDR InfiniBand pass-through module for the Dell M1000e chassis, and a mezzanine version of the QLE7340 QDR IB HCA.

Managing enclosure

An M1000e enclosure offers several ways of management. The M1000e offers 'out of band' management: a dedicated VLAN (or even physical LAN) for management. The CMC modules in the enclosure offer management Ethernet interfaces and do not rely on network connections made via the I/O switches in the blade enclosure. One would normally connect the Ethernet links on the CMC directly, avoiding any switch in the enclosure. Often a physically isolated LAN is created for management, allowing management access to all enclosures even when the entire infrastructure is down. Each M1000e chassis can hold two CMC modules.

Each enclosure can have either one or two CMC controllers, and by default one can access the CMC webgui via HTTPS and use SSH for command-line access. It is also possible to access the enclosure management via a serial port for CLI access, or to use a local keyboard, mouse and monitor via the iKVM switch. It is possible to daisy-chain several M1000e enclosures.

Management interfaces

Main page of the CMC Webgui

The information below assumes the use of the webgui of the M1000e CMC, although all functions are also available via the text-based CLI access. To access the management system one must open the CMC webgui via HTTPS using the out-of-band management IP address of the CMC. When the enclosure is in 'stand alone' mode one gets a general overview of the entire system: the webgui shows how the system looks in reality, including the status LEDs, etc. By default the Ethernet interface of a CMC card will get an address from a DHCP server, but it is also possible to configure an IPv4 or IPv6 address via the LCD display at the front of the chassis. Once the IP address is set or known, the operator can access the webgui using the default root account that is built in from the factory.

Via the CMC management one can configure chassis-related features: management IP addresses, authentication features (local user list, using a RADIUS or TACACS server), access options (webgui, CLI, serial link, KVM, etc.), error logging (syslog server), etc. Via the CMC interface one can also configure blades in the system and configure iDRAC access to those servers. Once enabled, one can access the iDRAC (and with that the console of the server) via this webgui or by directly opening the webgui of the iDRAC.
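In practice, the CMC configuration steps above are often performed with Dell's RACADM CLI over the SSH access mentioned earlier. The commands below are a hedged sketch: the command names come from Dell's RACADM documentation, but exact options vary between CMC firmware releases, and the IP addresses are documentation placeholders, not real values.

```shell
# All commands run against the CMC's out-of-band management address
# (192.0.2.10 is a documentation placeholder). RACADM is Dell's CLI for
# the CMC; option details can differ between CMC firmware versions.
CMC=root@192.0.2.10

# Chassis summary: CMC firmware version, blade inventory, I/O module status.
ssh "$CMC" racadm getsysinfo

# Set a static management IP on the CMC (IP / netmask / gateway are examples).
ssh "$CMC" racadm setniccfg -s 192.0.2.10 255.255.255.0 192.0.2.1

# Query the network settings of the iDRAC in blade slot 1 using the
# CMC's module addressing scheme (server-1 .. server-16).
ssh "$CMC" racadm getniccfg -m server-1
```

The same module addressing scheme can target I/O modules (e.g. `switch-1` through `switch-6`), which matches the per-slot IP assignment described below.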

The same applies to the I/O modules in the rear of the system: via the CMC one can assign an IP address to the I/O module in one of the 6 slots and then browse to the webgui of that module (if there is a web-based GUI: unmanaged pass-through modules won't offer a webgui as there is nothing to configure).

LCD display

On the front side of the chassis there is a small hidden LCD screen with 3 buttons: one 4-way directional button allowing one to navigate through the menus on the screen, and two "on/off" push buttons which work as an "OK" or "Escape" button. The screen can be used to check the status of the enclosure and the modules in it: one can for example check active alarms on the system, get the IP address of the CMC or KVM, check the system names, etc. Especially in an environment with several enclosures in one datacenter it can be useful to check whether one is working on the correct enclosure. Unlike rack or tower servers, there is only a very limited set of indicators on the individual servers: a blade server has a power LED and (local) disc-activity LEDs, but no LCD display offering alarms, hostnames, etc. Nor are there LEDs for I/O activity: this is all combined in this little screen, which gives information on the enclosure as well as on the inserted servers, switches, fans, power supplies, etc. The LCD screen can also be used for the initial configuration of an unconfigured chassis: one can use it to set the interface language and to set the IP address of the CMC for further CLI or web-based configuration.[2] During normal operation the display can be "pushed" into the chassis and is mostly hidden. To use it one needs to pull it out and tilt it to read the screen and reach the buttons.

Blade 17: Local management I/O

A blade system is not really designed for local (on-site) management, and nearly all communication with the modules in the enclosure and the enclosure itself is done via the "CMC" card(s) at the back of the enclosure. At the front side of the chassis, directly adjacent to the power button, one can connect a local terminal: a standard VGA monitor connector and two USB connectors. This connection is referred to inside the system as 'blade 17' and provides a local interface to the CMC management cards.[2]

iDRAC remote access

Apart from normal operational access to the blade servers (e.g. SSH sessions to a Linux-based OS, RDP to a Windows-based OS, etc.) there are roughly two ways to manage the server blades: via the iDRAC function or via the iKVM switch. Each blade in the enclosure comes with a built-in iDRAC that allows one to access the console over an IP connection. The iDRAC on a blade server works in the same way as an iDRAC card in a rack or tower server: there is a special iDRAC network to get access to the iDRAC function. In rack or tower servers a dedicated iDRAC Ethernet interface connects to a management LAN. On blade servers it works the same: via the CMC one configures the setup of the iDRAC, and access to the iDRAC of a blade is NOT linked to any of the on-board NICs: even if all the server's NICs were down (thus all the on-motherboard NICs and also Mezzanine B and C), one can still access the iDRAC.

iKVM: Remote console access

Apart from that, one can also connect a keyboard, mouse and monitor directly to the server: on a rack or tower server one would either connect the I/O devices when needed or have all the servers connected to a KVM switch. The same is possible with servers in a blade enclosure: via the optional iKVM module in an enclosure one can access each of the 16 blades directly. It is possible to include the iKVM switch in an existing network of digital or analog KVM switches. The iKVM switch in the Dell enclosure is an Avocent switch, and one can connect (tier) the iKVM module to other digital KVM switches such as the Dell 2161 and 4161 or Avocent DSR digital switches. Tiering the iKVM to analog KVM switches such as the Dell 2160AS or 180AS or other Avocent (compatible) KVM switches is also possible.[2] Unlike the CMC, the iKVM switch is not redundant, but as one can always access a server (also) via its iDRAC, an outage of the KVM switch does not stop one from accessing the server console.

Flex addresses

The M1000e enclosure offers the option of flex addresses. This feature allows the system administrators to use dedicated or fixed MAC addresses and World Wide Names (WWN) that are linked to the chassis, the position of the blade and the location of the I/O interface. It allows administrators to physically replace a server blade and/or a mezzanine card while the system continues to use the same MAC addresses and/or WWNs for that blade, without the need to manually change any MAC or WWN addresses, avoiding the risk of introducing duplicate addresses: with flex addresses the system assigns a globally unique MAC/WWN based on the location of that interface in the chassis. The flex addresses are stored on an SD card that is inserted in the CMC module of a chassis; when used, they override the addresses burned into the interfaces of the blades in the system.[2]
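The position-based assignment that flex addressing performs can be illustrated with a small sketch. The prefix, pool layout and function below are invented purely for illustration; Dell's actual MAC/WWN pool lives on the SD card and its internal format is not described in this article.

```python
# Illustrative sketch of chassis-based "flex addressing": each (slot, fabric,
# port) position in the enclosure maps to a fixed MAC from a chassis-owned
# pool, so replacing a blade does not change the address seen by the network.
# The prefix and pool layout here are invented for illustration only.

POOL_PREFIX = (0x02, 0x00, 0x00)  # locally administered example prefix

def flex_mac(slot, fabric, port):
    """Derive a deterministic MAC for a given chassis position.

    slot: blade slot number (1-based); fabric: 'A', 'B' or 'C'; port: 0 or 1.
    """
    fabric_idx = "ABC".index(fabric)                  # fabrics A, B, C -> 0..2
    offset = (slot - 1) * 6 + fabric_idx * 2 + port   # position in the pool
    octets = POOL_PREFIX + (0x10, 0x00, offset)
    return ":".join(f"{o:02x}" for o in octets)

# The address depends only on the position, never on the installed blade:
print(flex_mac(slot=1, fabric="A", port=0))  # 02:00:00:10:00:00
print(flex_mac(slot=2, fabric="B", port=1))  # 02:00:00:10:00:09
```

Because the mapping is a pure function of chassis position, swapping the hardware in a slot reproduces the same address, which is exactly the property the feature provides.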

Power and cooling

The M1000e enclosure is, like most blade systems, designed for IT infrastructures demanding high availability. (Nearly) everything in the enclosure supports redundant operation: each of the 3 I/O fabrics (A, B and C) supports two switches or pass-through cards, and the enclosure supports two CMC controllers, even though the chassis can run with only one CMC. Power and cooling are also redundant: the chassis supports up to six power supplies and nine fan units. All power supplies and fan units are inserted from the back and are hot-swappable.[2] The power supplies are located at the bottom of the enclosure, while the fan units are located next to and in between the switch or I/O modules. Each power supply is a 2700-watt unit and uses 208–240 V AC as input voltage. A chassis can run with as few as two power supplies (a 2+0 non-redundant configuration). Depending on the required redundancy one can use a 2+2 or 3+3 setup (input redundancy, where each group of supplies is connected to a different power source) or a 3+1, 4+2 or 5+1 setup, which gives protection if one power supply unit fails, but not against losing an entire AC power group.[1]
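The redundancy arithmetic above can be made concrete with a short sketch. The 2700 W figure comes from the text; the helper function itself is just an illustration of how grid (N+N) and power-supply (N+1) redundancy reserve different amounts of capacity, not a Dell power-budgeting tool.

```python
# Sketch of M1000e power redundancy arithmetic: up to six 2700 W supply
# bays, where the chosen redundancy policy determines how much capacity
# is still guaranteed after the tolerated failure.
PSU_WATTS = 2700

def usable_watts(installed, policy):
    """Capacity still guaranteed after the tolerated failure.

    policy: 'none' -> all supplies usable (e.g. a 2+0 setup)
            'psu'  -> N+1: survive the loss of one supply (e.g. 5+1)
            'grid' -> N+N: survive the loss of one AC feed (e.g. 3+3)
    """
    if policy == "none":
        reserve = 0
    elif policy == "psu":
        reserve = 1
    elif policy == "grid":
        reserve = installed // 2   # half the supplies sit on each AC feed
    else:
        raise ValueError(policy)
    return (installed - reserve) * PSU_WATTS

print(usable_watts(6, "grid"))  # 3+3 grid redundancy -> 8100 W
print(usable_watts(6, "psu"))   # 5+1 PSU redundancy  -> 13500 W
```

This shows why grid redundancy is the more expensive policy: a 3+3 setup guarantees only half the installed capacity, while 5+1 gives up just one supply's worth.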

Adabiyotlar

  1. ^ a b c d e Dell website Tech specs for the M1000e, visited 10 March 2013
  2. ^ a b c d e f g h Dell support website M1000e owners manual, retrieved 26 October 2012
  3. ^ PowerEdge M1000e Installation Guide, Revision A05, page 47-51. Date: March 2011. Retrieved: 25 January 2013
  4. ^ a b "Details on the Dell PowerEdge M420 Blade Server". BladesMadeSimple.com. 22 May 2012. Retrieved 29 January 2017.
  5. ^ "Dell PowerEdge M420 Blade Server - Dell". Dell.com. Retrieved 29 January 2017.
  6. ^ a b Dell website: Poweredge M630 Technical specifications, visited 29 August 2016.
  7. ^ Tech Specs brochure PowerEdge M610, updated 20 December 2011
  8. ^ Technical specs of the Dell PowerEdge M610x, retrieved 20 December 2011
  9. ^ Overview of technical specifications of the Poweredge M620, visited 12 June 2012
  10. ^ Dell website announcing G12 servers with details on virtualisation Archived 2012-06-14 at the Wayback Machine, visited 12 June 2012
  11. ^ Tech Specs brochure PowerEdge M710, retrieved 27 June 2011
  12. ^ Tech Specs for Power Edge PowerEdge M710HD, retrieved 20 December 2011
  13. ^ Dell website: Poweredge M820 Technical specifications, visited 28 July 2012
  14. ^ Technical specs on the M910, retrieved 20 December 2011
  15. ^ Dell website with technical specification of the M915 blade, retrieved 20 December 2011
  16. ^ Footnote:Except the PE M420 which only supports one Mezzanine card: The PE M420 quarter height blade server only has a Mezzanine B slot
  17. ^ a b Dell support site with an Overview manuals for the M1000e chassis, visited 27 June 2011
  18. ^ Whitepaper on redundant SD card installation of Hypervisors, visited 19 February 2013
  19. ^ Technical specifications of the Equallogic PS M4110 blade array, visited 27 September 2012
  20. ^ a b Dell datasheet for the PS-M4110, downloaded: 2 March 2013
  21. ^ a b Using M1000e System with an AMCC QT2025 Backplane PHY in a 10GBASE-KR Application, retrieved 12 June 2012
  22. ^ a b How to find midplane revision of M1000e, visited 19 September 2012
  23. ^ a b PowerConnect M-series User Guide, firmware 4.x, March 2011, retrieved 26 June 2011
  24. ^ Dell website: available blade-switches PCM8024 not listed as available, 29 December 2011
  25. ^ Dell website PCM8024-k, visited 29 December 2012
  26. ^ Release notes page 6 and further included in firmware package PC 4.2.1.3, release-date 2 February 2012, downloaded: 16 February 2012
  27. ^ Stacking the PowerConnect 10G switches, December 2011. Visited 10 March 2013
  28. ^ PCM6348 User Configuration Guide, downloaded 10 March 2013
  29. ^ Dell community website: Dell announces F10 MXL switch, 24 April 2012. Visited 18 May 2012
  30. ^ EWeek: Dell unveils 40GbE Enabled networking switch, 24 April 2012. Visited 18 May 2012
  31. ^ Dell website: PowerEdge M I/O Aggregator, August, 2012. Visited: 26 October 2012
  32. ^ Cisco website: Comprehensive Blade Server I/O Solutions, visited: 14 April 2012
  33. ^ Catalyst 3032 for Dell, visited: 14 April 2012
  34. ^ Nexus FEX for M1000e, visited 2 July, 2017
  35. ^ a b Catalyst for Dell at a glance, retrieved: 14 April 2012
  36. ^ Dell website Catalyst 3130G Archived 2011-06-21 at the Wayback Machine, visited 14 April 2012
  37. ^ Dell website on Catalyst 3130X Archived 2011-06-21 at the Wayback Machine, visited 14 April 2012
  38. ^ Cisco datasheet on the Catalyst 3130, section: 3130 software. Visited: 14 April 2012
  39. ^ TheRegister website: Cisco drops the Nexus 4001d blade switch, 16 February 2010. Visited: 10 March 2013
  40. ^ Cisco datasheet: Cisco Nexus B22 Blade Fabric Extender Data Sheet, 2013. Downloaded: 10 March 2013
  41. ^ Manuals and Documents for PowerEdge M1000E, visited 9 March 2013
  42. ^ Usermanual for the 10GbE XAUI passthrough module, 2010, visited: 10 March 2013
  43. ^ Usermanual for the 10 Gb passthrough -k for M1000e, 2011. Visited: 10 March 2013
  44. ^ Brocade M6505 for M1000e chassis, visited 2 July 2017
  45. ^ Userguide for the Infiniscal IV, 2009. Downloaded: 10 March 2013
  46. ^ Dell website Specifications of the M8424 Converged 10Gbe switch, visited 12 October 2012
  47. ^ Details on the PC-B-8000 switch, visited 18 March 2012
  48. ^ "Brocade M5424 Blade Server SAN I/O Module Hardware Reference Manual, September 2008" (PDF). Support.Euro.Dell.com. Retrieved 12 October 2012.
  49. ^ M6505 technical overview, visited 2 July 2017
  50. ^ Dell manual: Brocade 4424 Blade Server SAN I/O Module Hardware Reference, November 2007. Downloaded: 12 October 2012
  51. ^ News, NO: IDG
  52. ^ Cisco datasheet on the SFS M7000e Infiniband switch, March 2008. Visited: 12 October 2012
  53. ^ Mellanox Userguide for the SwitchX M4001 Infiniband switches, November 2011. Retrieved: 12 October 2012
  54. ^ a b Mellanox userguide for the M2401 Infiniband switch, June 2008. Visited: 12 October 2012
  55. ^ Dell website Gigabit passthrough module for M-series Archived 2010-12-18 at the Wayback Machine, visited 26 June 2011
  56. ^ 10Gb Pass Through Specifications, PDF, retrieved 27 June 2011