Python code to remove HTML tags from a string
Using a regex
Using a regex, you can clean everything inside <>:
import re
# as per recommendation from @freylis, compile once only
CLEANR = re.compile('<.*?>')

def cleanhtml(raw_html):
    cleantext = re.sub(CLEANR, '', raw_html)
    return cleantext
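A quick check of the function (the sample HTML string below is only illustrative):

print(cleanhtml('<div>Some <i>sample</i> text</div>'))
# Some sample text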
Some HTML texts can also contain entities that are not enclosed in brackets, such as &nbsp;. If that is the case, then you might want to write the regex as
CLEANR = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')
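A minimal sketch of the same helper with the entity-aware pattern (the sample string is illustrative; note that entities are stripped, not decoded):

import re

CLEANR = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')

def cleanhtml(raw_html):
    return re.sub(CLEANR, '', raw_html)

print(cleanhtml('<p>Hello,&nbsp;world!</p>'))
# Hello,world!   (the &nbsp; entity is removed, not replaced with a space)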
Using BeautifulSoup
You could also use the BeautifulSoup additional package to extract all the raw text.
You will need to explicitly set a parser when calling BeautifulSoup. I recommend lxml, as mentioned in alternative answers; it is much more robust than the default html.parser, which is available without an additional install.
from bs4 import BeautifulSoup
cleantext = BeautifulSoup(raw_html, "lxml").text
But this doesn't avoid external libraries, so if you want to stay dependency-free I recommend the first solution.
EDIT: To use lxml you need to pip install lxml.
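If you'd rather not install lxml, a sketch using the built-in html.parser backend instead (the sample string is illustrative):

from bs4 import BeautifulSoup

raw_html = '<p>Hello <b>world</b></p>'  # illustrative input
cleantext = BeautifulSoup(raw_html, "html.parser").get_text()
print(cleantext)
# Hello world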
Python has several XML modules built in. The simplest one for the case that you already have a string with the full HTML is xml.etree, which works (somewhat) similarly to the lxml example you mention:
import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())
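For example (the sample markup is illustrative; note that fromstring() expects well-formed markup with a single root element):

print(remove_tags('<p>Hello <b>world</b></p>'))
# Hello world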
Note that this isn't perfect, since if you had something like, say, <a title=">"> it would break. However, it's about the closest you'd get in non-library Python without a really complex function:
import re

TAG_RE = re.compile(r'<[^>]+>')

def remove_tags(text):
    return TAG_RE.sub('', text)
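A quick illustration of the failure mode mentioned above (the sample string is illustrative):

print(remove_tags('<a title=">">link</a>'))
# ">link   (the regex stops at the first ">", so stray characters remain)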
However, as lvc mentions, xml.etree is available in the Python Standard Library, so you could probably just adapt it to serve like your existing lxml version:
import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())