![The framework of RMNN](./docs/image.png)
## What is RMNN?
RMNN is not limited to representing matrices: it can also represent tensors and non-grid data. It provides a high-level API for quickly applying recent signal representations such as DMF (Deep Matrix Factorization) [1], INR (Implicit Neural Representation) [2], and LRTFR (Low-Rank Tensor Function Representation) [3]. It also offers various regularizers, such as TV (Total Variation) [4,5], AIR (Adaptive and Implicit Regularization) [6], and INRR (Implicit Neural Representation Regularizer) [7], for solving inverse problems such as recovering missing data and denoising.
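To make these pieces concrete, here is a minimal sketch of matrix completion with a DMF model [1] plus a TV penalty [4], written in plain PyTorch. This is only a flavor of the kind of model RMNN wraps; the names are illustrative and this is not RMNN's own API.

```python
import torch

torch.manual_seed(0)

# Toy problem: a rank-4 64x64 matrix with roughly half the entries observed.
n, r = 64, 4
X_true = torch.randn(n, r) @ torch.randn(r, n)
mask = torch.rand(n, n) < 0.5

# DMF [1]: parameterize the reconstruction as a product of factor matrices;
# gradient descent on such a factorization is implicitly biased toward low rank.
factors = [torch.nn.Parameter(0.1 * torch.randn(n, n)) for _ in range(3)]

def reconstruct():
    X = factors[0]
    for F in factors[1:]:
        X = X @ F
    return X

def tv(X):
    # Anisotropic total variation [4]: absolute differences of neighboring entries.
    return (X[1:, :] - X[:-1, :]).abs().sum() + (X[:, 1:] - X[:, :-1]).abs().sum()

opt = torch.optim.Adam(factors, lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    X = reconstruct()
    # Fit only the observed entries; the TV term regularizes the rest.
    loss = ((X - X_true)[mask] ** 2).mean() + 1e-5 * tv(X)
    loss.backward()
    opt.step()

rel_err = ((reconstruct() - X_true).norm() / X_true.norm()).item()
print(f"relative recovery error on all entries: {rel_err:.3f}")
```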
## How to use it?
Example code is given in main.ipynb.
## Design philosophy
**Decoupling and modularization**
Decouple data from code, and decouple the core algorithms inside the code from input and output. The advantage of decoupling is that the code can grow by iterating on its parts rather than being reworked as a whole. Concretely, decoupling means modular code, which is commonplace in electronic product design. The idea traces back to the von Neumann architecture, in which each part of a computer system performs its own duty and the interfaces between parts are standardized. That design was crucial to the later prosperity of computing: it let researchers and vendors focus on perfecting a single module, as with Intel for CPUs and NVIDIA for GPUs.
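As a toy illustration of this decoupling (a hypothetical layout, not RMNN's actual module structure), data loading, the model, and the fitting loop can each live behind a small, standardized interface:

```python
import torch

def load_data():
    """Data module: knows only how to produce (observations, mask)."""
    X = torch.randn(32, 32)
    mask = torch.rand(32, 32) < 0.5
    return X, mask

def build_model():
    """Model module: maps parameters to a reconstruction, nothing else."""
    U = torch.nn.Parameter(0.1 * torch.randn(32, 4))
    V = torch.nn.Parameter(0.1 * torch.randn(4, 32))
    return [U, V], lambda: U @ V

def fit(params, forward, X, mask, steps=500):
    """Core algorithm: knows nothing about where data or models come from."""
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward() - X)[mask] ** 2).mean()
        loss.backward()
        opt.step()
    return forward().detach()

X, mask = load_data()
params, forward = build_model()
X_hat = fit(params, forward, X, mask)
```

Because the three pieces only meet at their interfaces, swapping the factorization for an INR, or the data loader for a tensor dataset, leaves the fitting loop untouched.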
**Multi-scale aggregation**
Multi-scale ideas appear widely in both mathematics and computer science. In mathematics, typical examples are pyramid algorithms and wavelets; in computing, a typical example is the stack of abstraction layers in programming languages, from hardware-level control up through high-level languages, where each step trades away some operational flexibility in exchange for greater convenience.
Of course, many high-level languages manage to pursue convenience without losing flexibility. I have not reached that level yet, so what I can do is decompose the code hierarchy more clearly, exposing layers of abstraction that later make it easy to decide at which level the code should change.
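For a small concrete instance of the multi-scale idea, here is a toy image pyramid built by repeated 2x average pooling, a box-filter stand-in for the pyramid algorithms mentioned above:

```python
import torch
import torch.nn.functional as F

image = torch.randn(1, 1, 64, 64)   # (batch, channel, H, W)
pyramid = [image]
for _ in range(3):                  # three progressively coarser scales
    pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))

for level, img in enumerate(pyramid):
    print(f"level {level}: {tuple(img.shape[-2:])}")
# level 0: (64, 64), level 1: (32, 32), level 2: (16, 16), level 3: (8, 8)
```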
## References
[1] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. "Implicit Regularization in Deep Matrix Factorization." Neural Information Processing Systems, 2019.
[2] Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. "Implicit Neural Representations with Periodic Activation Functions." Neural Information Processing Systems, 2020.
[3] Yisi Luo, Xile Zhao, Zhemin Li, Michael K. Ng, and Deyu Meng. "Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery." 2022.
[4] Jian-Feng Cai, Bin Dong, Stanley Osher, and Zuowei Shen. "Image Restoration: Total Variation, Wavelet Frames, and Beyond." Journal of the American Mathematical Society, 2012.
[5] Zhemin Li, Zhi-Qin John Xu, Tao Luo, and Hongxia Wang. "A Regularised Deep Matrix Factorised Model of Matrix Completion for Image Restoration." IET Image Processing, 2022, 16(12): 3212-3224.
[6] Zhemin Li, Tao Sun, Hongxia Wang, and Bao Wang. "Adaptive and Implicit Regularization for Matrix Completion." SIAM Journal on Imaging Sciences, 2022, 15(4): 2000-2022.
[7] Zhemin Li, Hongxia Wang, and Deyu Meng. "Regularize Implicit Neural Representation by Itself." CVPR 2023.