# jqfactor_analyzer
**JoinQuant Single-Factor Analysis Tool, Open-Source Edition**
---
jqfactor_analyzer is JoinQuant's open-source single-factor analysis tool. It computes a range of detailed metrics, including factor IC, factor returns, and factor turnover, so users can inspect whichever aspects of a factor they need.
## **Installation**
```bash
pip install jqfactor_analyzer
```
## **Upgrading**
```bash
pip install -U jqfactor_analyzer
```
## **Usage**
[analyze_factor](https://github.com/JoinQuant/jqfactor_analyzer/blob/master/docs/API%E6%96%87%E6%A1%A3.md): the factor analysis function
## **Example**
* ### Example: analyzing the 5-day average turnover factor
```python
# Import libraries
import pandas as pd
import jqfactor_analyzer as ja

# Authenticate with jqdatasdk using your username and password.
# Sign up at http://t.cn/EINDOxE (JoinQuant website); usage guide: http://t.cn/EINcS4j
import jqdatasdk
jqdatasdk.auth('username', 'password')

# Get the 5-day average turnover factor for 2018-01-01 to 2018-12-31
# (this example loads it directly from the bundled sample data;
# see below for how to fetch data from the JoinQuant factor library)
from jqfactor_analyzer.sample import VOL5
factor_data = VOL5

# Analyze the factor
far = ja.analyze_factor(
    factor_data,          # factor_data is a pandas.DataFrame of factor values
    quantiles=10,
    periods=(1, 10),
    industry='jq_l1',
    weight_method='avg',
    max_loss=0.1
)

# Get the cleaned factor's IC values
far.ic
```
Output:

```python
# Generate the full set of summary charts
far.create_full_tear_sheet(
    demeaned=False, group_adjust=False, by_group=False,
    turnover_periods=None, avgretplot=(5, 15), std_bar=False
)
```
Output:

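For intuition about what `far.ic` reports: the rank IC for a single date is the Spearman correlation between that day's cross-section of factor values and the following period's returns. A minimal sketch on hypothetical toy numbers (not the library's own implementation):

```python
import pandas as pd

# Hypothetical one-day cross-section of factor values and next-period returns
factor = pd.Series([0.84, 0.43, 2.33, 0.86, 0.96],
                   index=['000001.XSHE', '000002.XSHE', '000063.XSHE',
                          '000069.XSHE', '000100.XSHE'])
fwd_ret = pd.Series([0.010, -0.020, 0.030, 0.005, 0.012], index=factor.index)

# Rank IC: Spearman correlation, i.e. Pearson correlation of the ranks
ic = factor.rank().corr(fwd_ret.rank())
print(ic)  # close to 1 means the factor ordered the stocks like their returns
```

Averaging this number over all dates in the sample gives the usual IC time-series summary.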
## How to fetch data from the JoinQuant factor library
1. The [JoinQuant factor library](https://www.joinquant.com/help/api/help?name=factor_values) contains hundreds of factors across quality, sentiment, risk, and other categories
2. Fetch them through JoinQuant's [`jqdatasdk`](https://github.com/JoinQuant/jqdatasdk/blob/master/README.md) data API ([trial sign-up](http://t.cn/EINDOxE))
```python
# Fetch factor data, using the 5-day average turnover factor as an example;
# the result can be fed to the analyzer directly.
# See the jqdatasdk API docs for details.
import jqdatasdk
jqdatasdk.auth('username', 'password')
# Fetch VOL5 from the JoinQuant factor library
factor_data = jqdatasdk.get_factor_values(
    securities=jqdatasdk.get_index_stocks('000300.XSHG'),
    factors=['VOL5'],
    start_date='2018-01-01',
    end_date='2018-12-31')['VOL5']
```
## Converting your own factor values into a DataFrame of the required format
* index holds the dates, as a standard pandas DatetimeIndex
* columns holds the stock codes, which must follow JoinQuant's code conventions (e.g. Ping An Bank's code is 000001.XSHE)
  * stocks listed on the Shenzhen exchange take the suffix .XSHE
  * stocks listed on the Shanghai exchange take the suffix .XSHG
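If your raw codes lack the exchange suffix, a small helper can add it. This is a hypothetical convenience function, assuming the usual A-share numbering convention that codes beginning with '6' are Shanghai-listed; verify this holds for your universe:

```python
def add_exchange_suffix(code: str) -> str:
    """Append JoinQuant's exchange suffix to a bare 6-digit A-share code."""
    # Assumption: codes starting with '6' are Shanghai (.XSHG);
    # the rest (e.g. starting with '0' or '3') are Shenzhen (.XSHE)
    return code + ('.XSHG' if code.startswith('6') else '.XSHE')

print(add_exchange_suffix('600000'))  # 600000.XSHG
print(add_exchange_suffix('000001'))  # 000001.XSHE
```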
* Converting a pandas.DataFrame into the required format
  First, make sure the index is a `DatetimeIndex`.
  This is usually done with pandas' [`pandas.to_datetime`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) function; before calling it, check that every index value is a well-formed date string such as `'2018-01-01'` or `'20180101'`.
  Also make sure the index is sorted in ascending date order, e.g. with [`sort_index`](https://pandas.pydata.org/pandas-docs/version/0.23.3/generated/pandas.DataFrame.sort_index.html).
  Finally, check that every stock code in columns follows JoinQuant's code conventions.
```python
import pandas as pd

sample_data = pd.DataFrame(
    [[0.84, 0.43, 2.33, 0.86, 0.96],
     [1.06, 0.51, 2.60, 0.90, 1.09],
     [1.12, 0.54, 2.68, 0.94, 1.12],
     [1.07, 0.64, 2.65, 1.33, 1.15],
     [1.21, 0.73, 2.97, 1.65, 1.19]],
    index=['2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-08'],
    columns=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE']
)

print(sample_data)

factor_data = sample_data.copy()
# Convert the index to a DatetimeIndex
factor_data.index = pd.to_datetime(factor_data.index)
# Sort the DataFrame by date
factor_data = factor_data.sort_index()
# Check that the columns match JoinQuant's stock code format
code_pattern = r'\d{6}\.XSH[EG]'
if not factor_data.columns.astype(str).str.match(code_pattern).all():
    print("Some codes do not match JoinQuant's stock code format:")
    print(factor_data.columns[~factor_data.columns.astype(str).str.match(code_pattern)])

print(factor_data)
```
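Factor values sometimes arrive in long form instead, with one row per (date, stock) pair; a `pivot` then gets you to the wide layout described above. A sketch on hypothetical data:

```python
import pandas as pd

# Hypothetical long-format factor data: one row per (date, stock) pair
long_data = pd.DataFrame({
    'date':  ['2018-01-02', '2018-01-02', '2018-01-03', '2018-01-03'],
    'code':  ['000001.XSHE', '000002.XSHE', '000001.XSHE', '000002.XSHE'],
    'value': [0.84, 0.43, 1.06, 0.51],
})

# Pivot to dates-on-index, codes-on-columns, then apply the same cleanup as above
factor_data = long_data.pivot(index='date', columns='code', values='value')
factor_data.index = pd.to_datetime(factor_data.index)
factor_data = factor_data.sort_index()
print(factor_data)
```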
* Converting a `dict` that maps dates to `Series` of per-stock factor values into a `pandas.DataFrame`
  This can be done directly with `pandas.DataFrame`
```python
import pandas as pd

sample_data = \
    {'2018-01-02': pd.Series([0.84, 0.43, 2.33, 0.86, 0.96],
                             index=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE']),
     '2018-01-03': pd.Series([1.06, 0.51, 2.60, 0.90, 1.09],
                             index=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE']),
     '2018-01-04': pd.Series([1.12, 0.54, 2.68, 0.94, 1.12],
                             index=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE']),
     '2018-01-05': pd.Series([1.07, 0.64, 2.65, 1.33, 1.15],
                             index=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE']),
     '2018-01-08': pd.Series([1.21, 0.73, 2.97, 1.65, 1.19],
                             index=['000001.XSHE', '000002.XSHE', '000063.XSHE', '000069.XSHE', '000100.XSHE'])}

# pd.DataFrame puts the dict keys (dates) on the columns; transpose so they end up on the index
factor_data = pd.DataFrame(sample_data).T

print(factor_data)

# Then follow the DataFrame steps above to finish converting to the required format
```
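Another common starting point is a single `Series` with a (date, code) `MultiIndex`, similar to the layout used by alphalens-style tools; `unstack` converts it to the wide format. A sketch on hypothetical data:

```python
import pandas as pd

# Hypothetical factor values keyed by a (date, code) MultiIndex
s = pd.Series(
    [0.84, 0.43, 1.06, 0.51],
    index=pd.MultiIndex.from_product(
        [pd.to_datetime(['2018-01-02', '2018-01-03']),
         ['000001.XSHE', '000002.XSHE']],
        names=['date', 'code'],
    ),
)

# Move the 'code' level to the columns, leaving dates on the index
factor_data = s.unstack(level='code')
print(factor_data)
```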