Discussion:
[squid-users] redirect based on url (302)
uppsalanet
2018-09-21 14:43:54 UTC
Permalink
Hi,
We use Squid to limit web traffic to a few internal sites; the computers are
in public areas. That works well. Now I have a new case:

If a user goes to the page "https://browzine.com" and chooses to view a
magazine, they get redirected (302) to another site. I would like to allow
that redirect if it is "https://browzine.com" (api.thirdiron.com) that issues
the redirect.

Ex:
https://browzine.com -> http://api.thirdiron.com ->
https://www.sciencedirect.com (this last redirect differs a lot based on
the magazine provider)
Headers:
General
Request URL: http://api.thirdiron.com/v2/libraries/223/articles/203497919/content
Request Method: GET
Status Code: 302 Found
Remote Address: 54.221.220.6:80
Referrer Policy: no-referrer-when-downgrade

Response Headers
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Methods: DELETE,GET,PATCH,POST,PUT
Access-Control-Allow-Origin: *
Connection: keep-alive
Date: Fri, 21 Sep 2018 13:36:18 GMT
Location: https://www.sciencedirect.com/science/article/pii/S2212671612001655
Server: Cowboy
Transfer-Encoding: chunked
Via: 1.1 vegur
X-Powered-By: Express

Brgd
Fredrik




Amos Jeffries
2018-09-21 16:35:25 UTC
Permalink
Post by uppsalanet
Hi,
We use Squid to limit web traffic to a few internal sites; the computers are
If a user goes to the page "https://browzine.com" and chooses to view a
magazine, they get redirected (302) to another site. I would like to allow
that redirect if it is "https://browzine.com" (api.thirdiron.com) that issues
the redirect.
Can you explain that differently please?


Amos
uppsalanet
2018-09-24 06:38:39 UTC
Permalink
Hi Amos,
Today I have a conf like this:
....
acl LIB_domains dstdomain .almedalsbiblioteket.se .alvin-portal.org \
    .bibliotekuppsala.se
http_access allow LIB_domains
....

Now I also need to open up .browzine.com. The problem with
.browzine.com is that it is a portal with many links to other sites, so I
would basically need to open up and maintain 400 sites in a Squid ACL.

I would like to take another approach instead (but I don't know if it's
possible):
I know that browzine.com will reply with a 302 when you try to access a link
on their site. So I would like to accept all redirect (302) destinations from
browzine.com.

Hope that clarifies things, and thanks in advance
Fredrik



Amos Jeffries
2018-09-24 09:30:39 UTC
Permalink
Post by uppsalanet
Hi Amos,
....
acl LIB_domains dstdomain .almedalsbiblioteket.se .alvin-portal.org \
    .bibliotekuppsala.se
http_access allow LIB_domains
....
Now I also need to open up .browzine.com. The problem with
.browzine.com is that it is a portal with many links to other sites, so I
would basically need to open up and maintain 400 sites in a Squid ACL.
I would like to take another approach instead (but I don't know if it's
I know that browzine.com will reply with a 302 when you try to access a link
on their site. So I would like to accept all redirect (302) destinations from
browzine.com.
Aha, that is clearer. Thank you.

I think you can possibly achieve this, but *only* because those 302s
exist. If the site were just a collection of links it would be much
more difficult.


What I am thinking of is to use a custom external ACL script that
creates a temporary browsing session for a client when the 302 arrives,
then use the SQL session helper to allow matching traffic through for
the follow-up request from that client.

You will need a database with a table created like this:

CREATE TABLE sessions (
  id VARCHAR(256) NOT NULL PRIMARY KEY,
  enabled DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

You need to write a script which receives an IP and a URL from Squid,
extracts the domain name from the URL, adds the string "$ip $domain"
to that table as the id column, then returns the "OK" result to Squid.
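
A minimal sketch of such a helper, assuming a DBI-reachable MySQL
database (the DSN, credentials, and the INSERT IGNORE syntax are
placeholders, not part of any shipped helper):

#!/usr/bin/perl
# Hypothetical whitelist-add helper: reads "%SRC %{Location}" pairs
# from Squid, extracts the host from the Location URL, and stores
# "$ip $domain" in the sessions table described above.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=squid;host=localhost",
                       "user", "password",
                       { RaiseError => 1, AutoCommit => 1 });
# INSERT IGNORE (MySQL) makes repeat 302s for the same pair harmless.
my $ins = $dbh->prepare("INSERT IGNORE INTO sessions (id) VALUES (?)");

$| = 1;   # Squid helpers must not buffer their replies
while (my $line = <STDIN>) {
    chomp $line;
    my ($ip, $url) = split ' ', $line, 2;
    if (defined $url and $url =~ m{^https?://([^/:]+)}i) {
        $ins->execute("$ip $1");   # whitelist this client+domain pair
        print "OK\n";
    } else {
        print "ERR\n";             # no usable Location header
    }
}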

The page at
<http://www.squid-cache.org/Versions/v4/manuals/ext_sql_session_acl.html> has
details of the SQL session helper that uses that table to check for
whitelisted domains.


Your config would look like:

acl 302 http_status 302
acl browzine dstdomain .browzine.com

external_acl_type whitelist_add %SRC %{Location} \
    /path/to/whitelist_script

acl add_to_whitelist external whitelist_add

http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all


external_acl_type whitelist ttl=60 %SRC %DST \
    /usr/lib/squid/ext_session_db_acl \
    --dsn ... --user ... --password ... \
    --table sessions --cond ""

acl whitelisted external whitelist
http_access allow whitelisted
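
Conceptually, the lookup side then reduces to a query like this
(hypothetical values; the helper builds the key from the "%SRC %DST"
pair Squid hands it):

-- Does this client+domain pair currently have a session?
SELECT id FROM sessions WHERE id = '192.0.2.1 www.sciencedirect.com';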


To have sessions expire, simply remove them from the database table.
Squid will start rejecting that traffic within 60 seconds of the removal
(the ttl=60 on the lookup).
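
For example, a cron job could prune stale entries (MySQL syntax; the
one-hour lifetime is an arbitrary choice):

-- Drop whitelist entries older than one hour.
DELETE FROM sessions WHERE enabled < NOW() - INTERVAL 1 HOUR;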

HTH
Amos
Eliezer Croitoru
2018-10-06 19:41:03 UTC
Permalink
Amos,

Would an ICAP service that sits on the RESPMOD vector be a better
solution than opening a new session?

Thanks,
Eliezer
Post by Amos Jeffries
[...]
--
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: ***@ngtech.co.il
Amos Jeffries
2018-10-06 22:44:45 UTC
Permalink
Post by Eliezer Croitoru
Amos,
Would an ICAP service that sits on the RESPMOD vector be a better
solution than opening a new session?
"Opening a new session" is what any such ICAP would have to do. It is
also overkill for that small action.

Amos
Eliezer Croitoru
2018-10-07 15:37:45 UTC
Permalink
Hey Amos,

I still believe that if Squid manages the connections and the ICAP
service maintains the ACL list based on these 302s,
it would be much faster than opening new connections to the WWW.
If bandwidth, CPU, and other resources are not an issue, and all the
requests will only ever involve public domains,
then I would agree that an ICAP service is not required.

An external_acl is a good approach, and I believe that a proper DB
should be used.
From what I remember, the last time I used sqlite3 I had an issue
when two helpers accessed the DB for writing at the same time.

Eliezer
Post by Amos Jeffries
Post by Eliezer Croitoru
Amos,
Would an ICAP service that sits on the RESPMOD vector be a better
solution than opening a new session?
"Opening a new session" is what any such ICAP would have to do. It is
also overkill for that small action.
Amos
--
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: ***@ngtech.co.il
Amos Jeffries
2018-10-07 22:52:44 UTC
Permalink
Post by Eliezer Croitoru
Hey Amos,
I still believe that if Squid manages the connections and the ICAP
service maintains the ACL list based on these 302s,
it would be much faster than opening new connections to the WWW.
Where are you getting this "new connections to the WWW" idea?

My suggestion does not involve any extra connections.

Amos
Eliezer Croitoru
2018-10-08 19:11:49 UTC
Permalink
Amos, I probably missed a couple of lines.
It's doable, but if there is a specific set of domains or URLs
then I will need to try it and see how it works.

Eliezer
Post by Amos Jeffries
[...]
--
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: ***@ngtech.co.il
uppsalanet
2018-10-23 14:31:11 UTC
Permalink
Thanks Amos for all your help.
I've done a few of your suggested steps:
* Created the database.
createdb.sql
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/createdb.sql>
* The ACL to fill up the database with values works fine :-)
external_acl_type whitelist_add ttl=10 %SRC %<{Location} \
    /etc/squid/add2db.pl
add2db.pl
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/add2db.pl>

So now I fill up the database with records like this:
dbdump.txt
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/dbdump.txt>

My question is how I get the domains back out of it? I don't really
understand this part:
external_acl_type whitelist ttl=60 %SRC %DST \
    /usr/lib/squid/ext_session_db_acl \
    --dsn ... --user ... --password ... \
    --table sessions --cond ""

Do I need to write another script for that
("/usr/lib/squid/ext_session_db_acl")?

squid -v
squid_version.txt
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/squid_version.txt>

Thanks in advance
Fredrik



Amos Jeffries
2018-10-23 15:06:03 UTC
Permalink
Post by uppsalanet
Thanks Amos for all your help.
* Created the database.
createdb.sql
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/createdb.sql>
* The ACL to fill up the database with values works fine :-)
external_acl_type whitelist_add ttl=10 %SRC %<{Location} \
    /etc/squid/add2db.pl
add2db.pl
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/add2db.pl>
dbdump.txt
<http://squid-web-proxy-cache.1019090.n4.nabble.com/file/t377569/dbdump.txt>
My question is how I get the domains back out of it? I don't really understand
external_acl_type whitelist ttl=60 %SRC %DST \
    /usr/lib/squid/ext_session_db_acl \
    --dsn ... --user ... --password ... \
    --table sessions --cond ""
Do I need to write another script for that
("/usr/lib/squid/ext_session_db_acl")?
Nope, Squid should have come with that helper. It may not be at that
exact path though.

All you should have to do now is find where that helper binary actually
is and set up those parameters so it can access your DB.

Amos
uppsalanet
2018-10-30 11:49:04 UTC
Permalink
Thanks,
I missed that I needed to install squid-helpers ("yum install squid-helpers") :-)
Now it's there.

Now I use it like this:

external_acl_type whitelist ttl=60 children-max=1 %SRC %DST \
    /usr/lib64/squid/ext_sql_session_acl --user root --password config \
    --table sessions --cond "" --debug

But I receive this:
2018/10/30 12:38:37.279| 82,9| external_acl.cc(600) aclMatchExternal:
acl="whitelist"
2018/10/30 12:38:37.280| 82,9| external_acl.cc(629) aclMatchExternal: No
helper entry available
2018/10/30 12:38:37.280| 82,2| external_acl.cc(663) aclMatchExternal:
whitelist("130.238.171.59 muse.jhu.edu -") = lookup needed
2018/10/30 12:38:37.280| 82,2| external_acl.cc(667) aclMatchExternal:
"130.238.171.59 muse.jhu.edu -": queueing a call.
2018/10/30 12:38:37.280| 82,2| external_acl.cc(1031) Start: fg lookup in
'whitelist' for '130.238.171.59 muse.jhu.edu -'
2018/10/30 12:38:37.280| 82,4| external_acl.cc(1071) Start:
externalAclLookup: looking up for '130.238.171.59 muse.jhu.edu -' in
'whitelist'.
2018/10/30 12:38:37.280| Starting new whitelist helpers...
2018/10/30 12:38:37.282| 82,4| external_acl.cc(1086) Start:
externalAclLookup: will wait for the result of '130.238.171.59 muse.jhu.edu
-' in 'whitelist' (ch=0x26782c8).
2018/10/30 12:38:37.282| 82,2| external_acl.cc(670) aclMatchExternal:
"130.238.171.59 muse.jhu.edu -": return -1.
Received: Channel=, UID=''
Query: SELECT '' as 'user', '' as 'tag' FROM sessions WHERE (id = ?) UID
queried: ''
Rows: 0
2018/10/30 12:38:37.420| 82,2| external_acl.cc(958) externalAclHandleReply:
reply={result=Unknown, other: "ERR message="unknown UID ''""}

Looking into the code of ext_sql_session_acl at line 190:
my ($cid, $uid) = ($1, $2);

I assume this will split $_ into $cid and $uid. But the debug output says:
Received: Channel=, UID=''
Query: SELECT '' as 'user', '' as 'tag' FROM sessions WHERE (id = ?) UID
queried: ''
Rows: 0

Have I done something wrong?
/Fredrik






Amos Jeffries
2018-10-31 02:16:17 UTC
Permalink
Post by uppsalanet
Thanks,
I missed that I needed to install squid-helpers ("yum install squid-helpers") :-)
Now it's there.
external_acl_type whitelist ttl=60 children-max=1 %SRC %DST \
    /usr/lib64/squid/ext_sql_session_acl --user root --password config \
    --table sessions --cond "" --debug
acl="whitelist"
2018/10/30 12:38:37.280| 82,9| external_acl.cc(629) aclMatchExternal: No
helper entry available
whitelist("130.238.171.59 muse.jhu.edu -") = lookup needed
"130.238.171.59 muse.jhu.edu -": queueing a call.
2018/10/30 12:38:37.280| 82,2| external_acl.cc(1031) Start: fg lookup in
'whitelist' for '130.238.171.59 muse.jhu.edu -'
Oh darn. Sorry, I forgot about the implicit %DATA parameters on external
ACLs yet again. One of the things on my long todo list is to make them
optionally ignored.

For now the easiest fix/workaround is to have your custom helper append
that " -" string to the IDs in the database.
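
In other words, the stored id needs the trailing " -" that the implicit
%DATA token appends to the lookup key. Using the key from the debug
output above, a row would look like:

INSERT INTO sessions (id) VALUES ('130.238.171.59 muse.jhu.edu -');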

Amos
uppsalanet
2018-10-31 10:27:20 UTC
Permalink
Hi Amos,
Is there a git repository that I can use to push stuff to?

I think you need to split the string in another way; look at this
example:
#!/usr/bin/perl
use strict;
use warnings;

$| = 1;
while (<>) {
    my $string = $_;
    print "Received '\$_' = ".$_."\n";

    $string =~ m/^(\d+)\s(.*)$/;
    print "After regexp '\$string' = ".$string."\n";
    print "After regexp '\$1' = ".$1."\n";
    print "After regexp '\$2' = ".$2."\n";

    ### Original split from source ###
    ### This doesn't split anything; it looks like elements of an array?
    #my ($cid, $uid) = ($1, $2);

    ### Split the string ###
    ### These two split on one or more spaces
    #my ($cid, $uid) = split(/\s+/, $_);
    my ($cid, $uid) = split;
    $cid =~ s/%(..)/pack("H*", $1)/ge;
    $uid =~ s/%(..)/pack("H*", $1)/ge;
    print "After split \$cid = ".$cid."\n";
    print "After split \$uid = ".$uid."\n";
}

Output from the above with the input value '130.238.000.00 muse.jhu.edu -':
Received '$_' = 130.238.000.00 muse.jhu.edu -
After regexp '$string' = 130.238.000.00 muse.jhu.edu -
Use of uninitialized value $1 in concatenation (.) or string at
./sed_test_reg.pl line 13, <> line 1.
After regexp '$1' =
Use of uninitialized value $2 in concatenation (.) or string at
./sed_test_reg.pl line 14, <> line 1.
After regexp '$2' =
After split $cid = 130.238.000.00
After split $uid = muse.jhu.edu

Cheers
Fredrik



Amos Jeffries
2018-11-01 04:13:19 UTC
Permalink
Post by uppsalanet
Hi Amos,
Is there a git repository that I can use to push stuff to?
Do you mean to make a change PR against the official code?

The key details for people wanting to assist with Squid development are
linked from here: <https://wiki.squid-cache.org/DeveloperResources>
Post by uppsalanet
I think you need to split the string in another way; look at this
example:
[...]
After split $cid = 130.238.000.00
After split $uid = muse.jhu.edu
$cid should be the concurrency channel ID, configured with the
"concurrency=N" option to external_acl_type in squid.conf. (Seems I
missed another bit of the config.)
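
For reference, enabling concurrency would look something like this (a
sketch based on your earlier config line; concurrency=10 is an arbitrary
value):

external_acl_type whitelist ttl=60 children-max=1 concurrency=10 %SRC %DST \
    /usr/lib64/squid/ext_sql_session_acl --user root --password config \
    --table sessions --cond ""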

If you want to assist with fixing the helper, it could do with a
change to auto-detect whether the first column is a CID (numeric only)
or not (anything but whitespace following the numerals).
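
A sketch of that auto-detection (hypothetical code, not the shipped
helper; the database lookup itself is elided):

#!/usr/bin/perl
# Treat the first token as a concurrency channel ID only when it is
# entirely numeric; an IP like "130.238.171.59" will not match because
# a dot, not whitespace, follows the digits.
use strict;
use warnings;

$| = 1;
while (my $line = <STDIN>) {
    chomp $line;
    my ($cid, $uid);
    if ($line =~ /^(\d+)\s+(.*)$/) {
        ($cid, $uid) = ($1, $2);       # e.g. "0 130.238.171.59 muse.jhu.edu -"
    } else {
        ($cid, $uid) = (undef, $line); # e.g. "130.238.171.59 muse.jhu.edu -"
    }
    # ... look $uid up in the sessions table here ...
    my $reply = "OK";                  # placeholder result
    print defined $cid ? "$cid $reply\n" : "$reply\n";
}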


Amos
uppsalanet
2018-11-08 15:52:38 UTC
Permalink
I'm stuck again :-(

It stopped working for some reason. I'm not able to trap the 302 anymore.
This is my squid.conf (snippet):

##### Ext magazine domains
debug_options 11,10 58,10 82,10
acl 302 http_status 302
acl browzine dstdomain .browzine.com .thirdiron.com
http_access allow browzine

external_acl_type whitelist_add ttl=10 %SRC %<h{Location} \
    /etc/squid/add2db.pl

acl add_to_whitelist external whitelist_add
http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all
##### Ext magazine domains

This is what I get from curl on the same server:
curl -I https://api.thirdiron.com/v2/libraries/223/articles/201309075/content
HTTP/1.1 302 Found
Server: Cowboy
Connection: keep-alive
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Methods: DELETE,GET,PATCH,POST,PUT
Location: http://www.tandfonline.com/doi/full/10.1080/00020184.2018.1459287
Set-Cookie: connect.sid=s%3AygAG53nVxrcphMYobmgFN4WIHWa2dgv0.29L5g8MvGC6Awk3pE5JZ4xKYcSqyI3L7vAiUXbAUmHM; Path=/; HttpOnly
Date: Thu, 08 Nov 2018 15:39:10 GMT
Via: 1.1 vegur

I'm probably doing something wrong :-)
Regards
Fredrik





uppsalanet
2018-11-16 08:22:18 UTC
Permalink
Just for documentation purposes: Amos's suggestion works perfectly:
##### Ext magazine domains
debug_options 11,10 58,10 82,10
acl 302 http_status 302
acl browzine dstdomain .browzine.com .thirdiron.com
http_access allow browzine

external_acl_type whitelist_add ttl=10 %SRC %<h{Location} \
    /etc/squid/add2db.pl

acl add_to_whitelist external whitelist_add
http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all
##### Ext magazine domains

The reason it's not working for me now is that the site I'm reaching has
turned on HTTPS encryption. The TLS-encrypted tunnel prevents me from
seeing the HTTP headers, which means I cannot distinguish individual
responses :-(

/F





Amos Jeffries
2018-11-16 09:57:24 UTC
Permalink
Post by uppsalanet
##### Ext magazine domains
debug_options 11,10 58,10 82,10
acl 302 http_status 302
acl browzine dstdomain .browzine.com .thirdiron.com
http_access allow browzine
external_acl_type whitelist_add ttl=10 %SRC %<h{Location} \
    /etc/squid/add2db.pl
acl add_to_whitelist external whitelist_add
http_reply_access allow browzine 302 add_to_whitelist
http_reply_access allow all
##### Ext magazine domains
The reason it's not working for me now is that the site I'm reaching has
turned on HTTPS encryption. The TLS-encrypted tunnel prevents me from
seeing the HTTP headers, which means I cannot distinguish individual
responses :-(
The only way around that is to intercept and decrypt the HTTPS using
Squid's SSL-Bump features.
<https://wiki.squid-cache.org/Features/SslPeekAndSplice>

SSL-Bump requires that you are in a position to install a trusted CA
certificate into all client devices. Even where the decryption is
technically possible, there are legal implications which vary around the
world, so please do check with a lawyer before going ahead with it.
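
As a rough sketch, bumping only the API domains and splicing everything
else might look like this (a hypothetical squid.conf fragment; the CA
certificate path is a placeholder, and Squid-3.5 spells tls-cert= as
cert=):

# Explicit proxy port that can decrypt CONNECT tunnels.
http_port 3128 ssl-bump tls-cert=/etc/squid/ca.pem \
    generate-host-certificates=on
acl bump_step1 at_step SslBump1
acl bump_domains ssl::server_name .browzine.com .thirdiron.com
ssl_bump peek bump_step1
ssl_bump bump bump_domains
ssl_bump splice all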

Amos
