metadata-parser

Name: metadata-parser
Version: 1.0.0
Home page: https://github.com/jvanasco/metadata_parser
Summary: A module to parse metadata out of urls and html documents
Upload time: 2025-08-30 23:33:28
Author: Jonathan Vanasco
License: MIT
Keywords: opengraph, protocol, facebook


MetadataParser
==============

.. |build_status| image:: https://github.com/jvanasco/metadata_parser/workflows/Python%20package/badge.svg

Build Status: |build_status|

MetadataParser is a Python module for pulling metadata out of web documents.

`BeautifulSoup` is required for parsing.
`Requests` is required for fetching remote documents.
`tldextract` is utilized to parse domains, but can be disabled by setting an
environment variable.

This project has been used in production for many years, and has successfully
parsed billions of documents.


Versioning, Pinning, and Support
================================

This project is using a Semantic Versioning release schedule,
with a {MAJOR}.{MINOR}.{PATCH} format.

Users are advised to pin their installations to "metadata_parser<{MINOR +1}"

For example:

* if the current release is: `1.0.0`
* the advised pin is:  `metadata_parser<1.1.0`

PATCH releases will usually be bug fixes and new features that support backwards
compatibility with Public Methods.  Private Methods are not guaranteed to be
backwards compatible.

MINOR releases are triggered when there is a breaking change to Public Methods.
Once a new MINOR release is triggered, first-party support for the previous MINOR
release is EOL (end of life). PRs for previous releases are welcome, but giving
them proper attention is not guaranteed.

Future deprecations will raise warnings.

By populating the following environment variable, future deprecations will raise exceptions::

    export METADATA_PARSER_FUTURE=1

Installation
=============

::

    pip install metadata_parser


Installation Recommendation
===========================

The ``requests`` library version 2.4.3 or newer is strongly recommended.

This is not required, but it is better.  On earlier versions it is possible to
have an uncaught DecodeError exception when there is an underlying redirect/404.
Recent fixes to ``requests`` improve redirect handling, urllib3 integration,
and urllib3 error handling.


Features
========

* ``metadata_parser`` pulls as much metadata out of a document as possible
* Developers can set a 'strategy' for finding metadata (e.g. only accept
  opengraph or page attributes)
* Lightweight but functional(!) url validation
* Verbose logging

Logging
=======

This library utilizes extensive logging to help developers pinpoint problems.

* ``log.debug`` (10)
  This log level is mostly used to handle library maintenance and
  troubleshooting, aka "Library Debugging".  Library Debugging is verbose, but
  is nested under ``if __debug__:`` statements, so it is compiled away when
  PYTHONOPTIMIZE is set.
  Several sections of logic useful to developers will also emit logging
  statements at the ``debug`` level, regardless of PYTHONOPTIMIZE.

* ``log.info`` (20)
  This log level is only used during package initialization to notify if
  the ``tldextract`` package is being utilized or not.

* ``log.warning`` (30)
  Currently unused

* ``log.error`` (40)
  This log level will record each URL that a parse is attempted for.

  This log level is mostly used to alert users of errors that were
  encountered during url fetching and document parsing, and often emits a log
  statement just before an Exception is raised. The log statements will contain
  at least the exception type, and may contain the active URL and additional
  debugging information, if any of that information is available.
  
  URLs that trigger error logging should be collected and run on a secondary
  system that utilizes `log.debug` without PYTHONOPTIMIZE.


* ``log.critical`` (50)
  Currently unused


It is STRONGLY recommended to keep Python's logging at ``debug`` and not run
PYTHONOPTIMIZE if you are new to this package.

For experienced users, running under PYTHONOPTIMIZE to not emit debug logging is
designed to make the system run as fast as possible.  The intent of
``log.error`` is to present you with a feed of URLs as they are processed, and
show any errors that arise.  Any issues that arise should then be run on a
second system that enables debug logging to pinpoint the error.  This allows one
to split a deployment into production and R&D/troubleshooting, to maximize
the throughput of the production system.
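
For example, a minimal sketch of a development configuration that keeps
everything at ``debug`` (this uses only the standard library ``logging`` module;
nothing here is specific to this package)::

    >>> import logging
    >>> logging.basicConfig(level=logging.DEBUG)  # emit the verbose "Library Debugging" output
    >>> import metadata_parser
    >>> page = metadata_parser.MetadataParser(url="http://www.example.com")

In production, run under PYTHONOPTIMIZE (``python -O``) to compile away the
``if __debug__:`` blocks and rely on the ``log.error`` feed instead.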


Optional Integrations
=====================

* ``tldextract``
  This package will attempt to use the package ``tldextract`` for advanced domain
  and hostname analysis. If ``tldextract`` is not wanted, it can be disabled
  with an environment variable.


Environment Variables
=====================

* ``METADATA_PARSER__DISABLE_TLDEXTRACT``
  Default: "0"
  If set to "1", the package will not attempt to load ``tldextract``
  (see the sketch after this list).

* ``METADATA_PARSER__ENCODING_FALLBACK``
  Default: "ISO-8859-1"
  Used as the fallback when trying to decode a response.

*  ``METADATA_PARSER__DUMMY_URL``
   Used as the fallback URL when calculating url data.
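
For example, a minimal sketch of disabling ``tldextract``; this assumes the
variable must be set before the package is imported, since the decision is made
during package initialization (see the Logging notes above)::

    >>> import os
    >>> os.environ["METADATA_PARSER__DISABLE_TLDEXTRACT"] = "1"
    >>> import metadata_parser  # tldextract will not be loaded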


Notes
=====

1. This package requires BeautifulSoup 4.
2. For speed, it will instantiate a BeautifulSoup parser with lxml, and
   fall back to 'None' (the internal pure-Python parser) if it cannot load lxml.
3. URL Validation is not RFC compliant, but tries to be "Real World" compliant.

It is HIGHLY recommended that you install lxml for usage, as it is
considerably faster.

Developers should also use a very recent version of lxml: segfaults have been
reported on lxml versions < 2.3.x, so using at least the most recent 3.x
version is strongly recommended.

The default 'strategy' is to look in this order::

    meta,page,og,dc,

Which stands for the following::

    og = OpenGraph
    dc = DublinCore
    meta = metadata
    page = page elements

Developers can specify a strategy as a comma-separated list of the above.
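
For example, to only consult OpenGraph and DublinCore data (the ``strategy``
argument also appears in the Usage examples below)::

    >>> import metadata_parser
    >>> page = metadata_parser.MetadataParser(url="http://www.example.com")
    >>> print(page.get_metadatas('title', strategy=['og', 'dc']))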

The only 2 page elements currently supported are::

    <title>VALUE</title> -> metadata['page']['title']
    <link rel="canonical" href="VALUE"> -> metadata['page']['link']

'metadata' elements are matched on both their ``name`` and ``property`` attributes.

The MetadataParser object also wraps some convenience functions, which can be
used on their own, that are designed to turn alleged urls into well-formed urls.

For example, you may pull a page::

    http://www.example.com/path/to/file.html

and that file indicates a canonical url which is simply "/file.html".

This package will try to 'remount' the canonical url to the absolute url of
"http://www.example.com/file.html".
It will return None if the end result is not a valid url.

This all happens under-the-hood, and is honestly really useful when dealing
with indexers and spiders.
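
A hedged illustration using ``get_discrete_url`` (covered in detail below), and
assuming the fetched page declares ``<link rel="canonical" href="/file.html">``::

    >>> import metadata_parser
    >>> page = metadata_parser.MetadataParser(url="http://www.example.com/path/to/file.html")
    >>> print(page.get_discrete_url())
    http://www.example.com/file.html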


URL Validation
==============

"Real World" URL validation is enabled by default.  This is not RFC compliant.

There are a few gaps in the RFCs that allow for "odd behavior".
Just about any use-case for this package will desire/expect rules that parse
URLs "in the wild", not theoretical.

The differences:

* If an entirely numeric IP address is encountered, it is assumed to be a
  dot-notation IPv4 address and it is checked to have the right number of valid octets.
  
  The default behavior is to invalidate these hosts::

        http://256.256.256.256
        http://999.999.999.999.999

  According to RFCs those are valid hostnames that would fail as "IP Addresses"
  but pass as "Domain Names".  However in the real world, one would never
  encounter domain names like those.

* The only non-domain hostname that is allowed is "localhost".

  The default behavior is to invalidate these hosts::

        http://example
        http://examplecom

  Those are considered to be valid hosts, and might exist on a local network or
  custom hosts file.  However, they are not part of the public internet.

Although this behavior breaks RFCs, it greatly reduces the number of
"False Positives" generated when analyzing internet pages. If you want to
include bad data, you can submit a kwarg to ``MetadataParser.__init__``, as
sketched below.
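
A minimal sketch that relaxes the default validation; judging from the sections
below, the relevant keyword arguments appear to be ``require_public_netloc``
and ``allow_localhosts``::

    >>> page = metadata_parser.MetadataParser(
    ...     url="http://example",
    ...     require_public_netloc=False,
    ... )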


Handling Bad URLs and Encoded URIs
==================================

This library tries to safeguard against a few common situations.

Encoded URIs and relative urls
------------------------------

Most website publishers will define an image as a URL::

    <meta property="og:image" content="http://example.com/image.jpg" />

Some will define an image as an encoded URI::

    <meta property="og:image" content="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNM+Q8AAc0BZX6f84gAAAAASUVORK5CYII=" />

By default, the ``get_metadata_link()`` method can be used to ensure a valid link
is extracted from the metadata payload::

    >>> import metadata_parser
    >>> page = metadata_parser.MetadataParser(url="http://www.example.com")
    >>> print(page.get_metadata_link('image'))

This method accepts a kwarg ``allow_encoded_uri`` (default ``False``); when
``True``, the encoded URI is returned without further processing::

    >>> print(page.get_metadata_link('image', allow_encoded_uri=True))

Similarly, if a url is relative::

    <meta property="og:image" content="/image.jpg" />

The ``get_metadata_link`` method will automatically upgrade it onto the domain::

    >>> print(page.get_metadata_link('image'))
    http://example.com/image.jpg

Poorly Constructed Canonical URLs
---------------------------------

Many website publishers implement canonical URLs incorrectly.
This package tries to fix that.

By default ``MetadataParser`` is constructed with ``require_public_netloc=True``
and ``allow_localhosts=True``.

This will require somewhat valid 'public' network locations in the url.

For example, these will all be valid URLs::

    http://example.com
    http://1.2.3.4
    http://localhost
    http://127.0.0.1
    http://0.0.0.0

If these known 'localhost' urls are not wanted, they can be filtered out with
``allow_localhosts=False``::

    http://localhost
    http://127.0.0.1
    http://0.0.0.0

There are two convenience methods that can be used to get a canonical url or
calculate the effective url:

* MetadataParser.get_discrete_url
* MetadataParser.get_metadata_link

These both accept an argument ``require_public_global``, which defaults to ``True``.

Assuming we have the following content on the url ``http://example.com/path/to/foo``::

    <link rel="canonical" href="http://localhost:8000/alt-path/to/foo">

By default, versions 0.9.0 and later will detect 'localhost:8000' as an
improper canonical url, and remount the local part "/alt-path/to/foo" onto the
domain that served the file.  In the vast majority of cases where this
'behavior' has been encountered, this was the intended canonical::

    >>> print(page.get_discrete_url())
    http://example.com/alt-path/to/foo

In contrast, versions 0.8.3 and earlier will not catch this situation::

    >>> print(page.get_discrete_url())
    http://localhost:8000/alt-path/to/foo

In order to preserve the earlier behavior, just submit ``require_public_global=False``::

    >>> print(page.get_discrete_url(require_public_global=False))
    http://localhost:8000/alt-path/to/foo


Handling Bad Data
=================

Many CMS systems (and developers) create malformed content or incorrect
document identifiers.  When this happens, the BeautifulSoup parser will lose
data or move it into an unexpected place.

There are two arguments that can help you analyze this data (a combined sketch
follows below):

* force_doctype::

    ``MetadataParser(..., force_doctype=True, ...)``

``force_doctype=True`` will try to replace the identified doctype with "html"
via regex.  This will often make the input data usable by BS4.

* search_head_only::

    ``MetadataParser(..., search_head_only=False, ...)``

``search_head_only=False`` will not limit the search path to the "<head>" element.
This will have a slight performance hit and will incorporate data from CMS/User
content, not just templates/Site-Operators.
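
A combined sketch, applying both arguments to a problematic document::

    >>> HTML = """<here>"""
    >>> page = metadata_parser.MetadataParser(
    ...     html=HTML,
    ...     force_doctype=True,
    ...     search_head_only=False,
    ... )
    >>> print(page.metadata)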


WARNING
=============

Please pin your releases.


Usage
=====

Until version ``0.9.19``, the recommended way to get metadata was to use
``get_metadata``, which returns a string (or ``None``); newer versions recommend
``get_metadatas``, which returns a list (or ``None``), as used in the examples below:

**From an URL**::

    >>> import metadata_parser
    >>> page = metadata_parser.MetadataParser(url="http://www.example.com")
    >>> print(page.metadata)
    >>> print(page.get_metadatas('title'))
    >>> print(page.get_metadatas('title', strategy=['og',]))
    >>> print(page.get_metadatas('title', strategy=['page', 'og', 'dc',]))

**From HTML**::

    >>> HTML = """<here>"""
    >>> page = metadata_parser.MetadataParser(html=HTML)
    >>> print(page.metadata)
    >>> print(page.get_metadatas('title'))
    >>> print(page.get_metadatas('title', strategy=['og',]))
    >>> print(page.get_metadatas('title', strategy=['page', 'og', 'dc',]))


Malformed Data
==============

It is very common to find malformed data. As of version ``0.9.20`` the following
methods should be used to allow malformed presentation::

    >>> page = metadata_parser.MetadataParser(html=HTML, support_malformed=True)

or::

    >>> parsed = page.parse(html=html, support_malformed=True)
    >>> parsed = page.parse(html=html, support_malformed=False)

The above options will support parsing of common malformed content.  Currently
this only handles alternate (improper) ways of producing twitter tags, but it
may be expanded.

Notes
=====

When building on Python 3, a ``static`` toplevel directory may be needed.

This library was originally based on Erik River's
`opengraph module <https://github.com/erikriver/opengraph>`_. Something more
aggressive than Erik's module was needed, so this project was started.

            
