Commit d74fdc78 authored by Tails developers

More bugs cleanup and information salvaging.

parent 5e2ba567
For Tails [[!taglink release/2.0]], we want at least basic UEFI boot,
including on Mac.
Some hardware ([[bugs/ThinkPad_X220_vs_GPT]], recent Mac) cannot boot
Tails from USB, due to firmware limitations. Making Tails support UEFI
would fix this problem on such hardware.
* lernstick Grub configuration, implemented as a live-build binary
Please ship Tails with the Linux Standard Base (lsb) so that certain printer drivers can be installed.
> Tails 0.11 already ships all dependencies of the LSB 3.2 Printing
> Package, so I'm unsure what you mean. Please clarify. (For the
> record, I've added the `lsb-printing` package to the Tails packages
> list so that this remains true.)
>> In Tails 0.11, dpkg states that lsb is not installed which leads to installation failure.
Otherwise, there is no way to get those printer drivers working.
> Perhaps you should be more specific about what printer?
> In my experience, Tails supports out of the box most printers I've thrown at it.
>> Tails does not support certain all-in-one printers out-of-the-box. But this is not the problem because the required driver could be added manually if the lsb package were included in Tails.
>>> Sorry, it's unclear to me whether you are guessing or reporting
>>> actual facts. Have you experienced the following with a given
>>> printer:
>>>
>>> - the printer does not work out-of-the-box on Tails 0.11
>>> - pristine Tails 0.11 does not allow installing the required driver
>>> - Tails 0.11, once the `lsb` Debian package is installed, allows
>>> installing the required driver
>>>
>>> ?
>>>
>>> If you did, then we have to investigate what exact package Tails
>>> lacks. In Debian, `lsb` is a meta-package (ships nothing, only
>>> dependencies), and it depends on stuff we certainly don't want to
>>> install in Tails. On Tails 0.11, `apt-get install lsb` wants to
>>> install:
>>>
>>> alien at cups-bsd debhelper dpkg-dev ed exim4 exim4-base
>>> exim4-config exim4-daemon-light heirloom-mailx html2text
>>> intltool-debian libc-dev-bin libc6-dev libdpkg-perl libelf1
>>> libqt3-mt libqt4-gui libqt4-opengl libqt4-sql-sqlite librpm1
>>> librpmbuild1 librpmio1 linux-libc-dev lsb lsb-core lsb-cxx
>>> lsb-desktop lsb-graphics lsb-release m4 make ncurses-term pax
>>> po-debconf rpm rpm-common rpm2cpio time
>>>
>>> ... and it's not obvious to me which one of these packages would
>>> make a difference in the Tails ability to install printer drivers.
>>> The RPM and archive tools, maybe?
>>>
>>> If the above scenario describes actual facts, could you please try to install
>>> these packages one after the other, to see which one makes
>>> a difference regarding the ability to install the required driver
>>> for this printer?
>>>> I will test it and tell you about my experiences.
>>>>> No report so far, closing. Feel free to reopen. [[done]]
HTP connects to HTTPS sites, and fails if it can't verify the certificates.
=> If the system clock is too far off, HTP does not fix it and Tor
cannot connect to the network.
Possible kludge that could be set up until Tor is able to tell us the
correct time itself: inside the live system, we can figure out when it
was released. If HTP fails a first time, and if the current system
clock differs from that release date by more than 6 (?) months, we
start by setting the clock to the release date before attempting HTP
once more.
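A minimal sketch of that kludge in shell, with hypothetical helper names
(`get_release_date_secs`, `run_htp`) standing in for whatever the real
scripts would provide, and the tentative 6-month threshold from above:

    # Sketch only: helper names are hypothetical, not the actual Tails scripts.
    release_date_secs=$(get_release_date_secs)   # when this Tails image was released
    current_date_secs=$(date +%s)
    six_months=$((6 * 30 * 24 * 3600))

    if ! run_htp; then                            # first HTP attempt failed
        diff=$((current_date_secs - release_date_secs))
        [ "$diff" -lt 0 ] && diff=$((-diff))
        if [ "$diff" -gt "$six_months" ]; then
            date --set="@$release_date_secs"      # fall back to the release date
            run_htp                               # then try HTP once more
        fi
    fi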
> [[done]] in Tails 0.8
Tested using Privacybox.de POP3 connection
Setting Claws to delete old messages doesn't work. Individually deleting messages doesn't work either on PrivacyBox or TorMail: they are deleted locally but not on the server.
Tested with Thunderbird and it worked for both PrivacyBox and Tormail.
> Claws Mail's default setting for POP accounts is to delete the messages on the server *7 days after reception*. Did you try modifying this setting?
>> No reply, closing.
[[done]]
The "network.dns.disableIPv6" setting of Iceweasel should be set to
true.
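For reference, a minimal sketch of how such a pref can be set system-wide
(the file path is the one mentioned in the reply below; the exact mechanism
Tails uses may differ):

    # Sketch only: append the pref to the system-wide Iceweasel prefs file.
    echo 'pref("network.dns.disableIPv6", true);' >> /etc/iceweasel/pref/iceweasel.js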
> This has been set in `/etc/iceweasel/pref/iceweasel.js` and now needs
> to be tested (some Iceweasel settings must be set there, some others
> in `/etc/iceweasel/profile/user.js` to be taken into account).
> --intrigeri
>> I just checked it was working as intended in 0.6.1.
I don't know if this can cause information leakage in 0.5.
> 0.5 has a broken IPv6 firewall, so if an IPv6 address happens to be
> given by the resolver to Iceweasel, it would connect to this address,
> bypassing Tor. All DNS requests Tails 0.5 makes go through the Tor
> resolver. Therefore the possibility of information leakage depends
> on the answer to: does the Tor DNS resolver filter out IPv6
> addresses in responses? --intrigeri
>> 0.5 has been obsoleted more than two months ago.
[[done]]
# Description of the bug
Virtual keyboards (at least Onboard and Florence) don't work at startup in
Squeeze: when trying to use them with an AZERTY layout, they switch to QWERTY
when the window is clicked.
For the record: alternative virtual keyboards are listed and compared on
[[another todo item|todo/virtual_keyboard_in_Debian]].
# Workarounds
## setxkbmap
If one sets the X11 keyboard layout, e.g. using `gnome-keyboard-properties` or
`setxkbmap`, it works.
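For example, forcing a French (AZERTY) layout from a terminal (any layout
name works; `fr` is just the case discussed above):

    setxkbmap fr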
## Relation with gnome-at-spi
This bug also affects Florence if `gnome-at-spi` is not launched.
If one enables GNOME accessibility from `preferences` → `accessibility tools`
and then logs out and back in, the virtual keyboards work **but** if GDM (and
thus X) is restarted they stop working again.
# Faulty X call
Florence is affected by exactly the same bug, so the bug seems to lie not
in the virtual keyboard itself but rather somewhere in the X libs.
It appears that `python-virtkey` function `virtkey_get_current_group_name` in
`/src/python-virtkey.c` returns `USA` instead of `France` when the bug appears.
The wrong value is obtained from the following code (the real code has
a lot more safety checks around it):

    XkbStateRec state;
    XkbGetState(display, XkbUseCoreKbd, &state);
    int group = state.locked_group;
    Atom atom = cvirt->kbd->names->groups[group];
    char *group_name = XGetAtomName(display, atom);
> This might have been fixed in the meantime. Let's check if
> `python-virtkey` or the X libs have been updated since the original
> bug report (Nov 1st 2010), re-test this, and maybe report bugs
> upstream.
>>
>> As of January 2011, it is **not** fixed.
>>
>>> We worked around this bug in the devel branch (commit 212032e65ad)
>>> => [[done]] in 0.7.
Here is Vidalia debug info:
It repeats the following over and over until .onion sites time out. Connecting to any clearnet site through an exit node still works fine. Tried this on tormail, privacybox deepnet .onion, and multiple others.
Dec 04 02:37:06.329 [Info] connection_ap_handshake_attach_circuit(): pending-join circ 10592 already here, with intro ack. Stalling. (stream 26 sec old)
Dec 04 02:37:07.329 [Debug] circuit_get_open_circ_or_launch(): one on the way!
Dec 04 02:37:07.330 [Info] connection_ap_handshake_attach_circuit(): pending-join circ 10592 already here, with intro ack. Stalling. (stream 27 sec old)
Repeats then:
SOCKS error: TTL expired
> This has now been confirmed. When Tails sets the initial system time through
> [tordate](todo/remove_the_htp_user_firewall_exception), the time can be
> incorrect by up to 1.5 hours, but hidden services require a time that is off
> by at most 0.5 hours. This also explains the reported frequency of this issue
> occurring.
>
> Tails will set the time much more accurately when htpdate finishes, and
> connecting to hidden services should work just fine then. If the user tries
> to access a hidden service before the time is set, Tor's inability to handle
> clock jumps may render that hidden service inaccessible until Tor is
> restarted. Hence it seems we have to resurrect the htpdate notification we
> had earlier, although with a message urging users not to connect to
> hidden services just yet. At least as a short-term solution.
>
> It should also be noted that hidden services running a recent enough Tor
> (>=0.2.3.7-alpha) will not produce this problem, see [[!tor_bug 3460]].
> For instance, `duskgytldkxiuqc6.onion` is not affected and it's known to
> run a sufficiently new Tor.
>> Worked around in Tails 0.10.
[[done]]
As [written on thinkwiki](http://www.thinkwiki.org/wiki/Category:X220)
and confirmed by a Tails user, the X220 "cannot/will not boot GPT
disks using Legacy BIOS, you must setup UEFI". This is a bug in
its firmware.
In practice, that means a Tails pendrive setup by the Tails USB
installer cannot boot on a ThinkPad X220, unless we add UEFI support
to Tails.
The same applies to ThinkPad T520 and E325. Other models are listed on
[[support/known_issues]].
The solution is to implement UEFI support: [[!tails_ticket 5739]].
In the meantime, a workaround is to use the [[manual installation
process|doc/first_steps/manual_usb_installation/linux]]. Note,
however, that this technique does not allow you to set up
a persistent volume.
[[done]]
Hi Tails developers,
Here's a bug: I run Tails from USB. When the desktop shows, I start my wireless connection using the icon in the top-right corner. Once the connection is established, I receive the black pop-up alert about "Tor time synchronization". Four times out of five, Tor doesn't start and sits idle for 10 minutes; then another black pop-up appears saying the synchronization failed, Tor starts but exits because it can't connect, and after a few seconds the system becomes unstable and hangs completely, requiring a reset. I tried to run Tor manually but received the error "port already binded".
Question/workaround: is it possible to disable the "Tor time synchronization"?
Thanks :)
>> Likely to be fixed in feature/tordate and thus devel branches,
>> [[done]] in 0.9.
>> See [[todo/remove_the_htp_user_firewall_exception]] for details.
#### TAILS 0.8
After establishing a Tor circuit and automatically starting Iceweasel, a warning appears in the Vidalia Message Log:
* *Basic* <br />
[05:54:14] Potentially Dangerous Connection! - One of your applications established a connection through Tor to "8.8.8.8:53" using a protocol that may leak information about your destination. Please ensure you configure your applications to use only SOCKS4a or SOCKS5 with remote hostname resolution.
* *Advanced* <br />
Sep 26 05:54:14.291 [Warning] Your application (using socks4 to port 53) is giving Tor only an IP address. Applications that do DNS resolves themselves may leak information. Consider using Socks4A (e.g. via privoxy or socat) instead. For more information, please see https://wiki.torproject.org/TheOnionRouter/TorFAQ#SOCKSAndDNS.
> This warning can be safely ignored. The problem the warning is
> hinting at is this: DNS (which uses port 53) can be done both over
> TCP and UDP. Tor only supports TCP, so if the application chooses to
> use UDP for whatever reason, that query may not go through Tor.
> Hence anyone monitoring your connection would then see which site
> you're trying to reach, which is really bad. However, in Tails that
> cannot happen because of our [[DNS setup|contribute/design/Tor_enforcement/DNS]]
> and [[contribute/design/Tor_enforcement]] in general.
>> Thus closing. [[done]]
EDIT: Sometimes Iceweasel starts automatically with the system, sometimes not, but the warning appears regardless.
> This would be a different bug; it has nothing to do with this one.
EDIT#2: OPs were not specific, but for me, on 2 different hardware clients (HP tower & Dell LT), using different circuits, from different ISPs, in Tails 0.8 Tor failed to load (circuit node search timed out) after announcing this error in the Vidalia message log. Tails 0.72 had been running perfectly on the HP. YMMV but for some of us it appears that unedited v0.8 is completely broken...
> This would also be a different bug. Please open a new one and supply
> some more information. However, it may have been related to the
> recent [[issues with one of our time sources|bugs/__34__Clock_is_approx._6_months_after_the_release_date__34___but_it_was_set_correctly]]
> so you might want to retry.
Here's a patch for /etc/NetworkManager/dispatcher.d/50-htp.sh
39d38
< mail.riseup.net
177c176
< if [ "$(($release_date_secs + 259200))" -lt "$current_date_secs" ]; then
---
> if [ "$(($release_date_secs + 15552000))" -lt "$current_date_secs" ]; then
For the last two days, mail.riseup.net has frequently been failing, triggering htpdate's paranoid mode. Thus /usr/local/sbin/htpdate returns an error to 50-htp.sh, which calls is_clock_way_off and finds the clock is more than 3 days (rather than the intended 180 days) later than the Tails 0.8 release date.
So, Tails boots up with the hardware clock set correctly, sets the software clock back to September 16, and then can't connect to Tor.
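For reference, the two constants in the patch are just day counts expressed
in seconds; a quick shell check:

    echo $((   3 * 24 * 3600 ))   # 259200   = 3 days
    echo $(( 180 * 24 * 3600 ))   # 15552000 = 180 days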
Output at /var/log/htpdate.log looks something like this:
Running htpdate.
https://ssl.scroogle.org (took 1s) => diff = 1 second(s)
https://www.torproject.org (took 1s) => diff = 1 second(s)
No downloaded files can be found
No file could be downloaded from https://mail.riseup.net.
Paranoid mode: aborting as one server ( https://mail.riseup.net ) could not be reached
Clock is approx. 6 months after the release date
Running htpdate.
https://ssl.scroogle.org (took 1s) => diff = 1000000 second(s)
https://www.torproject.org (took 1s) => diff = 1000000 second(s)
No downloaded files can be found
No file could be downloaded from https://mail.riseup.net.
Paranoid mode: aborting as one server ( https://mail.riseup.net ) could not be reached
htpdate exited with return code 25
If you're going to use paranoid mode, then you should stick to the most reliable servers. Apparently mail.riseup.net cannot always handle the traffic.
> The problem is fixed now, but it was not mail.riseup.net being
> unable to handle the traffic. Between 2011-09-28 and 2011-09-29
> mail.riseup.net resolved to {204.13.164.27, 204.13.164.32,
> 204.13.164.33, 198.252.153.55, 198.252.153.56} (now it only resolves
> to the last two). The problem was that 204.13.164.x did an HTTP
> 302 redirect to fulvetta.riseup.net or fruiteater.riseup.net. These
> were not in /etc/hosts and thus had to be resolved with pdnsd, which
> uses Tor, which wasn't started yet so it failed (there's also an
> iframe from user.riseup.net which doesn't resolve and thus doesn't
> download -- it doesn't make wget fail, but it's very
> fingerprintable).
> Also, we made htpdate pass "--dns-timeout 1" to wget to work around a
> weird issue we had with some other time sources (see commit
> e291af5), so even if we had a real, working DNS server
> available for the htp user when it runs wget, this would fail if the
> time required to query the DNS server is longer than one second.
> I think the /etc/hosts approach needs to be reconsidered -- the
> current solution is too hackish and far from robust (and also
> fingerprintable). The best would be, I suppose, per-process DNS
> settings. I'm not aware of any tool with such capabilities so I
> assume we'd have to write a libresolv wrapper which shouldn't be
> _too_ difficult, but the question is if we want to maintain
> something like that. Another solution would be to temporarily set
> the DNS server obtained through DHCP in /etc/resolv.conf (dangerous!?).
>> The way we'll solve this class of issues is [[documented
>> there|todo/remove_the_htp_user_firewall_exception]]. Closing this
>> bug as a duplicate of what we already know we have to do.
I'm reopening this because whoever closed it showed no acknowledgment of the 3-day problem in /etc/NetworkManager/dispatcher.d/50-htp.sh.
DNS resolution aside, you still need to apply this patch:
177c176
< if [ "$(($release_date_secs + 259200))" -lt "$current_date_secs" ]; then
---
> if [ "$(($release_date_secs + 15552000))" -lt "$current_date_secs" ]; then
>>> Right. Applied to feature/tordate branch, thanks!
[[done]] in 0.9.
It has been reported that the IPv6 firewall was not active on Tails 0.5.
sudo ip6tables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
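For comparison, a minimal sketch of the kind of default-deny IPv6 policy one
would expect here instead (not necessarily the exact rules Tails ended up
shipping):

    ip6tables -P INPUT   DROP
    ip6tables -P FORWARD DROP
    ip6tables -P OUTPUT  DROP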
> Tails has no service listening on the network, and Tor is not able
> to resolve hostnames to IPv6 yet, so AFAIK the only problem caused
> by this bug is the possibility to directly connect (= *without*
> going through the Tor network) to a manually entered IPv6 address.
> The problem probably only arises when using an application that is
> not configured to use Polipo/Tor as a proxy; this means an
> application that is neither Iceweasel, nor a GNOME application, nor
> a program respecting `http_proxy` or `HTTP_PROXY`.
>
> The risk seems so tiny to me that I think it does not deserve a security
> announcement. The bug still needs to be fixed, though!
> Fixed by this very commit => [[done]].
Problem with [[todo/erase_memory_on_shutdown]]: when closing Amnesia
and shutting down, my PC shows rows upon rows of SQUASHFS errors, and
it doesn't seem to stop. I tried to press CTRL+C but nothing happens.
What should I do, besides radically unplugging the PC? Any advice
would be most welcome.
> This is a bug that needs to be fixed. In the meanwhile, yes, you can
> unplug the PC.
>> This bug has been fixed along with the
>> [[todo/smem_progress_report]]. Not tagging as `done` or `pending`
>> as this work has neither been merged into our master
>> `live-initramfs` branch yet, nor been integrated into our main repo
>> as a custom `.deb`.
>>> Now in Git => pending => done.
>>>> I hate it, but we must reopen this bug. The squashfs errors
>>>> strike back in 0.5, although I'm pretty sure we had got rid of
>>>> them at some point. A solution based on `/dev/shm` might be more
>>>> robust wrt. caching the necessary files. Otherwise, the heavy,
>>>> never-released Perl version could still be brought back.
>>>> The current error message (Lenny image built from Git on 20100903)
>>>> is:
I/O error, dev sr0, sector 43576
Buffer I/O error on device sr0, logical block 10894
Buffer I/O error on device sr0, logical block 10895
Buffer I/O error on device sr0, logical block 10896
Buffer I/O error on device sr0, logical block 10897
Buffer I/O error on device sr0, logical block 10898
Buffer I/O error on device sr0, logical block 10899
Buffer I/O error on device sr0, logical block 10900
Buffer I/O error on device sr0, logical block 10901
SQUASHFS error: squashfs_read_data failed to read block 0x88105b
>>>>> Now that the smem process is pretty quick we could avoid
>>>>> ejecting the CD before erasing the memory => this might be
>>>>> enough to fix this bug.
>>>>>> Fixed by the new kexec-based sdmem system implemented in the
>>>>>> devel branch.
>>>>>> This was tested using [[contribute/release_process/test/erase_memory_on_shutdown]] and
>>>>>> seems to work as expected.
>>>>>>> Removed the pending tag; the new implementation still has a bug: the
>>>>>>> machine isn't rebooted or shut down after memory wiping. The system
>>>>>>> hangs on the line "Starting new kernel" without any way for the user to
>>>>>>> be sure the memory wiping went fine.
>>>>>>>> I believe we need to move your bug report to a dedicated one,
>>>>>>>> where more information (e.g. hardware, RAM) can be provided:
>>>>>>>> the new system has a problem with your test machine, but it
>>>>>>>> generally works. Indeed, it has been shown to be working on a
>>>>>>>> few other test systems. The only bugs we have seen are
>>>>>>>> display corruption on KMS-enabled systems, probably due to
>>>>>>>> failure from the kernel to init the display from a
>>>>>>>> non-fresh-boot graphic mode.
>>>>>>>>> It seems that this situation happened only on very specific and probably
>>>>>>>>> not really functional hardware, so it should be fine to just mark it as
>>>>>>>>> [[done]].
>>>>>>> This might be due to the /sbin/halt binary being wiped from memory
>>>>>>> (along with the whole initramfs).
>>>>>>>> I am not sure about that, but I doubt the kernel would let a
>>>>>>>> userspace process wipe the initramfs from memory.
>>>>>>> This script should at least be able to shut down the machine, or tell
>>>>>>> the user that he/she can power it off when the sdmem process has
>>>>>>> finished; otherwise the user might be lost in front of this frozen
>>>>>>> screen.
>>>>>>>> On the systems that were used for development and testing,
>>>>>>>> the new system does shut down/reboot the machine according
>>>>>>>> to what the user initially asked. About the need for feedback
>>>>>>>> in case of failure: I do agree it would be nice, but we have
>>>>>>>> to diagnose first *when* your test system is crashing. If, as
>>>>>>>> I suspect, it crashes during early initramfs stages (e.g.
>>>>>>>> graphics initialization), we cannot do anything about it...
>>>>>>>> but fix the crash.
>>>>>>> This might also conflict with the
>>>>>>> [[memory erasure on media removal feature|todo/erase_memory_when_the_USB_stick_is_removed]],
>>>>>>> which would require the machine to shut down quickly.
>>>>>>>> I am sorry, but I do not understand this one.
Claws Mail (now using GnuTLS) believes some SSL certificates provided
by Gandi are wrong:
Signature status: No certificate issuer found
... although the gnutls-cli utility thinks the contrary:
gnutls-cli -V -p 993 \
--x509cafile /etc/ssl/certs/UTN_USERFirst_Hardware_Root_CA.pem \
mail.riseup.net
This seems to be a bug in Claws Mail's usage of GnuTLS.
> There have been changes in this field between these initial tests
> and now (20101223). We need to test this using the devel branch,
> and maybe the newer sid packages.
>> I could reproduce this bug outside of Tails with Debian Squeeze's
>> claws-mail (3.7.6-4).
This probably is
[Claws Mail bug #2199](http://www.thewildbeast.co.uk/claws-mail/bugzilla/show_bug.cgi?id=2199).
We should provide more information there, and help upstream understand that
this is not an enhancement request but rather a defect.
> Just for the record: it's not very likely that we find time and
> motivation to fix this, given we've decided to
> [[migrate back to icedove|todo/Return of Icedove?]]...
>> Closing, this is the last item in `bugs/*`, there's basically no
>> chance we work on this before moving to icedove, and it's
>> tracked upstream.
[[done]]
In Tails 0.12 we introduced `torsocks` as a replacement for `tsocks`
used by the `torify` script. The switch to `torsocks` made Claws Mail,
which is started using `torify`, leak the hostname in the HELO/EHLO
message, resulting in a hostname leak in the `Message ID` and
`Received` email headers. This is currently being worked around by
switching back to `tsocks` for Claws Mail only (in branch
`bugfix/claws_vs_torsocks`). See [[todo/applications_audit]] for the
more general issue.
> Fixed in Tails 0.12.1.
[[done]]
In Tails 0.10, Git cannot access `git://` URLs anymore, due to the end
of the transparent proxying.
Workaround: prefix your `git` commands with `torify`, such as:
$ torify git pull
> Here seems to be a (possible) solution:
> <http://www.patthoyts.tk/blog/using-git-with-socks-proxy.html>
>> Fixed in `bugfix/dumb_git` branch, merged into devel and stable
>> branches.
>>> [[done]] in Tails 0.10.1.
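For the record, a minimal sketch of one way to implement the approach from the
blog post linked above, routing `git://` connections through Tor's default
SOCKS port with `socat` (the helper script path is hypothetical, and this is
untested here):

    # Sketch only: write a tiny proxy helper and point Git's core.gitProxy at it.
    cat > /usr/local/bin/git-proxy <<'EOF'
    #!/bin/sh
    exec socat STDIO SOCKS4A:127.0.0.1:"$1":"$2",socksport=9050
    EOF
    chmod +x /usr/local/bin/git-proxy
    git config --global core.gitProxy /usr/local/bin/git-proxy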
The page at [[contribute/design/I2P]] describes a regexp:
URLs matching `^http://(127.0.0.1)|(localhost):7657(/.*)?` will get a direct connection to the local host so the I2P router console can be reached.
According to the FoxyProxy regex documentation (http://getfoxyproxy.org/patterns.html), FoxyProxy uses ECMAScript-compatible regexps, so what that regexp actually means is:
Any URL beginning "http://127[any char]0[any char]0[any char]1", OR any URI containing the string "localhost:7657" anywhere within it, will evade the proxy.
It's probably meant to be:
`^http://(127\.0\.0\.1|localhost):7657(/.*)?$`
Specifically, the changes here are:
1) We escape the wildcard dot characters to accept only a literal dot.
Failing to change this means that a URL like the following would match:
http://127x0y0z1/maliciousscript.php:7657
So, a machine named 127x0y0z1 on the same LAN could be accessed without the proxy. Not that machine names should begin with numbers, but still...
1a) You might also want to consider using `(?:blah1|blah2)` instead of `(blah1|blah2)` for performance reasons, unless you actually need to capture blah1 or blah2 for later use. But that's not a security thing, may make no difference, and may reduce readability/maintainability.
2) We place the "or" character inside the braces, where it separates only the halves of the braced clause, rather than having it separate the entire URL in two.
Failing to change this means that a URL like the following would match:
http://example.com/maliciousscript.php?localhost:7657
3) Added the final $ anchor, without which the final (/.*)? became meaningless.
Failing to change this means that URLs like the following would match:
http://localhost:76579
or
http://localhost:7657?something=bad
Not a terrible risk, but who can tell what's running on port 76579? So if you want to guarantee anything following the port number is separated by a slash, you need that anchor.
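A quick sanity check of the corrected regexp using `grep -E` (POSIX ERE, which
treats this particular pattern the same way as FoxyProxy's ECMAScript regexps):

    re='^http://(127\.0\.0\.1|localhost):7657(/.*)?$'
    echo 'http://127.0.0.1:7657/console'  | grep -Eq "$re" && echo matches
    echo 'http://127x0y0z1/evil.php:7657' | grep -Eq "$re" || echo does not match
    echo 'http://localhost:76579'         | grep -Eq "$re" || echo does not match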
While we're at it, let's look at the other regexps on that page.
`^https?://[^/]+\.i2p(:[0-9]{1,5})?(/.*)?`
Well, the following would match:
http://malicious.example.com?.i2p
That is, a regular .com site could be sent through the .i2p filter. No idea if that could be exploited, but let's fix that up anyway.
`^https?://[-a-zA-Z0-9.]+\.i2p(:[0-9]{1,5})?(/.*)?$`
Here, I've made a white-list for the domain name, instead of a blacklist; and again, I've added the terminating `$` anchor so that the `(/.*)?` is meaningful.
Again, the brackets (blah) should probably be non-capturing brackets, like `(?:blah)` for speed, but this reduces readability and maintainability, so I didn't include it above.
The third regexp looks fine to me.
[[todo/FTP_in_Iceweasel]] describes some more regexps. Let's check them, too.
`ftp://.*`
`http(s)?://.*`
These both need an anchor ^ at the beginning, and the ending `.*` seems pointless, since there's no end anchor. Also, the brackets: they do nothing. I'd replace these with:
`^ftp://`
`^https?://`
Finally, there's: `http://[a-zA-Z0-9\.]*\.i2p(/.*)?`
That's better than the last .i2p regexp, but you don't need to escape a dot inside a character class; domain names can include dashes, too; you need to deal with alternative port numbers; and you need beginning and ending anchors. So the version listed earlier is probably a better choice: