I'm having a problem removing non-UTF-8 characters from a string; they don't display properly. The characters look like this: 0x97 0x61 0x6C 0x6F (hex representation).
What is the best way to remove them? A regular expression, or something else?
This will work only for non-nested parentheses:
$regex = <<<'HERE'
/ " ( (?:[^"\\]++|\\.)*+ ) "
| ' ( (?:[^'\\]++|\\.)*+ ) '
| \( ( [^)]* ) \)
| [\s,]+
/x
HERE;
$tags = preg_split($regex, $str, -1,
    PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE);
The possessive quantifiers ++ and *+ consume as much as they can and give nothing back for backtracking. This technique is described in perlre(1) as the most efficient way to do this kind of matching.
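For illustration, here is a minimal usage sketch (the sample string is invented for this example):
$str = 'one, "two, quoted", (three, grouped), four';
$tags = preg_split($regex, $str, -1,
    PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE);
// $tags: ['one', 'two, quoted', 'three, grouped', 'four']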
The standard disclaimer applies: Parsing HTML with regular expressions is not ideal. Success depends on the well-formedness of the input on a character-by-character level. If you cannot guarantee this, the regex will fail to do the Right Thing at some point.
Having said that:
<a\b[^>]*>(.*?)</a> // match group one will contain the link text
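In PHP, for example, it could be applied like this (the HTML fragment and the i/s modifiers are assumptions for this sketch):
$html = '<a href="https://example.com">Example</a> and <a id="x">another</a>';
preg_match_all('%<a\b[^>]*>(.*?)</a>%is', $html, $m);
print_r($m[1]); // Array ( [0] => Example [1] => another )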
I'm guessing that the source of the URL is more at fault. Perhaps you're fixing the wrong problem? Removing "strange" characters from a URI might give it an entirely different meaning.
With that said, you may be able to remove all of the non-ASCII characters with a simple string replacement:
String fixed = original.replaceAll("[^\\x20-\\x7e]", "");
Or you can extend the kept range to everything that doesn't need four bytes in UTF-8 (the Basic Multilingual Plane), if the first pattern doesn't cover the "�" character:
String fixed = original.replaceAll("[^\\u0000-\\uFFFF]", "");
In general, to remove non-ASCII characters, use str.encode with errors='ignore':
df['col'] = df['col'].str.encode('ascii', 'ignore').str.decode('ascii')
To perform this on multiple string columns, use
u = df.select_dtypes(object)
df[u.columns] = u.apply(
    lambda x: x.str.encode('ascii', 'ignore').str.decode('ascii'))
That still won't handle the null characters in your columns, though. For those, replace them using a regex:
df2 = df.replace(r'\W+', '', regex=True)
Using a regex approach:
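Here is a sketch of what such a pattern could look like, assuming it is applied with preg_replace (the byte ranges follow the UTF-8 encoding rules; the {1,100} batching is the speed-up mentioned in the edit below):
$regex = <<<'END'
/
  (
    (?: [\x00-\x7F]                # single-byte sequences   0xxxxxxx
    |   [\xC2-\xDF][\x80-\xBF]     # double-byte sequences   110xxxxx 10xxxxxx
    |   [\xE0-\xEF][\x80-\xBF]{2}  # triple-byte sequences   1110xxxx 10xxxxxx * 2
    |   [\xF0-\xF4][\x80-\xBF]{3}  # quadruple-byte sequences 11110xxx 10xxxxxx * 3
    ){1,100}                       # ...one or more times
  )
| .                                # anything else (an invalid byte, not captured)
/x
END;
$clean = preg_replace($regex, '$1', $str); // keeps group 1, drops invalid bytes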
It searches for UTF-8 sequences, and captures those into group 1. It also matches single bytes that could not be identified as part of a UTF-8 sequence, but does not capture those. Replacement is whatever was captured into group 1. This effectively removes all invalid bytes.
It is possible to repair the string by encoding the invalid bytes as UTF-8 characters, but if the errors are random, this could leave some strange symbols.
EDIT:
!empty(x) will match non-empty values ("0" is considered empty).
x != "" will match non-empty values, including "0".
x !== "" will match anything except "".
x != "" seems the best one to use in this case.
I have also sped up the match a little. Instead of matching each character separately, it matches sequences of valid UTF-8 characters.
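A quick illustration of those three checks (hypothetical snippet):
var_dump(!empty("0")); // bool(false): "0" counts as empty
var_dump("0" != "");   // bool(true): "0" is kept by this check
var_dump("x" !== "");  // bool(true): anything except "" passes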