
利用深度學習恢復毫秒等級雙光子螢光三維顯微鏡之影像品質

Deep Learning for Image Restoration in Millisecond-scale Two-photon Fluorescence Volumetric Microscopy

Advisor: 朱士維

Abstract


The brain is fascinating and mysterious. Even though individual neurons have been studied thoroughly, our understanding of brain function remains limited, because brain function is not governed by any single neuron but by intricately interconnected neural networks. Drosophila, a model organism for many studies, not only has a fairly complete structural connectome of its nervous system, but its brain is also small enough for whole-brain imaging. To understand how Drosophila neurons form functional networks, we built a high-speed volumetric imaging system to capture fast signal transmission among neurons in three dimensions.

By combining multifocal two-photon microscopy with a tunable acoustic gradient (TAG) lens, we demonstrated high-speed volumetric imaging at more than 500 volumes per second. Compared with other single-point scanning schemes, the multifocal design provides much higher imaging speed, and the TAG lens allows the system to acquire three-dimensional images in densely packed neural tissue. Compared with wide-field illumination, the multifocal scheme significantly improves the signal-to-noise ratio (SNR) in scattering tissue. In addition, we used a 32-channel photomultiplier tube capable of delivering a signal throughput of 10 GB/s.

This high-speed fluorescence microscope seems to let us extract complete spatiotemporal information from the Drosophila brain. However, all fluorescence microscopy is subject to a fundamental constraint, the "eternal quadrilateral of compromise," formed by four key parameters: spatial resolution, imaging speed, contrast, and imaging depth. Optimizing any one of them usually sacrifices one or more of the others, because the photon budget available for imaging is limited by sample health, the chemistry of the fluorophores, and losses in the optics. Our system greatly increases imaging speed, but while cellular resolution and sufficient imaging depth are maintained, contrast is heavily sacrificed.

In this work, we use a deep learning model to restore image contrast and improve the SNR, thereby easing the mutual constraints among these key parameters. The image restoration model is based on the U-Net architecture and is trained on semi-simulated data, generated by multiplying real high-speed data with simulated dynamic signals that mimic the fluorescence intensity changes inside the Drosophila brain. The trained model restores high-speed but low-contrast images into high-speed, high-contrast images, effectively improving the SNR while preserving high temporal resolution. This result offsets the main drawback of our high-speed two-photon fluorescence volumetric microscope and gives the system greater potential to extract readable high-speed spatiotemporal information from the Drosophila brain.
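The abstract only names the ingredients of the semi-simulated training data (real high-speed volumes multiplied by simulated dynamic signals); the exact recipe is not given here. Purely as an illustration, below is a minimal NumPy sketch of how such pairs could be assembled, assuming a high-SNR reference volume (e.g., a temporal average) serves as the restoration target and that the dynamics follow calcium-like exponential decays. The function names and kinetic parameters are hypothetical, not taken from the thesis.

```python
import numpy as np

def simulate_activity_trace(n_frames, rng, dt=0.002, event_rate=0.05, tau=0.3):
    """Simulate a calcium-like fluorescence time course: random events with
    exponential decay, normalized so the baseline is 1. The kinetics here are
    placeholder assumptions, not values from the thesis."""
    trace = np.empty(n_frames, dtype=np.float32)
    amplitude = 0.0
    for t in range(n_frames):
        if rng.random() < event_rate:            # random "firing" event
            amplitude += rng.uniform(0.5, 2.0)   # event size (arbitrary units)
        amplitude *= np.exp(-dt / tau)           # exponential decay per frame
        trace[t] = 1.0 + amplitude
    return trace

def make_semi_simulated_pair(low_snr_volume, high_snr_volume, n_frames, rng):
    """Build one (input, target) training pair.

    low_snr_volume  : real volume from the >500 volumes/s acquisition (Z, Y, X)
    high_snr_volume : high-SNR reference of the same field, e.g. a temporal
                      average -- assumed here as the restoration target; the
                      abstract does not spell out how the target is obtained.
    Both volumes are modulated by the same simulated dynamic trace, so the
    network sees realistic intensity changes over time.
    """
    trace = simulate_activity_trace(n_frames, rng)     # shape (T,)
    dynamic = trace[:, None, None, None]               # broadcast to (T, Z, Y, X)
    net_input = low_snr_volume[None] * dynamic         # high-speed, low-contrast
    net_target = high_snr_volume[None] * dynamic       # same dynamics, high-contrast
    return net_input.astype(np.float32), net_target.astype(np.float32)

# Example with random placeholder volumes (Z=32, Y=64, X=64).
rng = np.random.default_rng(0)
low = rng.poisson(2.0, size=(32, 64, 64)).astype(np.float32)    # photon-starved
high = rng.poisson(50.0, size=(32, 64, 64)).astype(np.float32)  # well-averaged
x, y = make_semi_simulated_pair(low, high, n_frames=100, rng=rng)
print(x.shape, y.shape)   # (100, 32, 64, 64) each
```

A real pipeline would presumably draw per-neuron traces and add acquisition noise to the input branch; the sketch only illustrates the semi-simulated structure the abstract describes, namely measured spatial content multiplied by simulated temporal dynamics.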

Parallel Abstract


The brain is fascinating and mysterious. Even though single neurons have been studied thoroughly, our understanding of brain function is still limited, because that function emerges from the connections among numerous neurons rather than from any single cell. Drosophila, a model animal for brain studies, has a fairly complete neural structural map, and its brain is small enough to enable whole-brain optical imaging with sub-cellular resolution. To develop a tool for mapping the functional connectome of the Drosophila brain, volumetric acquisition with millisecond temporal resolution is necessary.

We demonstrated two-photon microscopy with a scanning speed of more than 500 volumes per second by combining multifocal multiphoton microscopy with a tunable acoustic gradient (TAG) lens. Compared to other laser-scanning schemes, the multifocal design provides much higher imaging speed, and the TAG lens provides full three-dimensional resolution among densely packed neurons. Compared to wide-field schemes, the multifocal system plus deconvolution offers a significantly improved signal-to-noise ratio (SNR) in scattering tissues. In addition, we used a 32-channel photomultiplier tube that delivers a signal throughput of 10 GB/s.

This high-speed fluorescence microscopy seems to allow us to extract complete spatiotemporal information from the Drosophila brain. However, all fluorescence microscopy techniques suffer from the so-called "eternal quadrilateral of compromise," composed of spatial resolution, imaging speed, contrast, and depth. None of these four fundamental imaging factors can be optimized without compromising at least one of the others, because the photon budget is limited by sample intactness, the chemistry of the fluorophores, and the optics of the microscope. In our system, contrast (SNR) is sacrificed by the dramatic increase in imaging speed while cellular resolution and sufficient imaging depth are maintained.

In this work, we mitigated this limitation through deep-learning image restoration to enhance the SNR. The model was trained on semi-simulated data, composed of real high-speed data multiplied by a simulated dynamic mask that mimics the functional changes in fluorescence intensity inside a Drosophila brain. The trained model restores high-speed but low-contrast images into high-speed, high-contrast images, effectively improving the SNR while maintaining high temporal resolution. This result shows that the fundamental limitation of this high-speed two-photon fluorescence volumetric microscopy is alleviated, making it more feasible to extract useful high-speed spatiotemporal information from the Drosophila brain.
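The abstract states only that the restoration model is U-Net based; the network depth, channel widths, loss, and training schedule are not specified here. The following is a minimal PyTorch sketch of a 3D U-Net trained on (low-contrast, high-contrast) volume pairs, with illustrative hyperparameters rather than the thesis's actual implementation.

```python
# Minimal 3D U-Net sketch for volume restoration (low-SNR in, high-SNR out).
# Depth, channel counts, loss, and learning rate are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # encoder level 1
        e2 = self.enc2(self.pool(e1))                       # encoder level 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1)) # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# One training step on a semi-simulated (low-contrast, high-contrast) pair.
model = UNet3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

noisy = torch.rand(1, 1, 32, 64, 64)   # placeholder low-contrast volume
clean = torch.rand(1, 1, 32, 64, 64)   # placeholder high-contrast target
optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```

The skip connections are what make a U-Net-style network a natural choice for this task: they pass fine spatial detail from the noisy input directly to the decoder while the contracting path learns the contrast mapping, so restored volumes can keep sub-cellular structure while the SNR is raised.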

