traveler 2025-04-17 17:29:05 -05:00
commit 5aa7d034f7
3292 changed files with 465160 additions and 0 deletions

20
.editorconfig Executable file

@@ -0,0 +1,20 @@
# EditorConfig helps developers define and maintain consistent
# coding styles between different editors and IDEs
# editorconfig.org
root = true
[*]
# Change these settings to your own preference
indent_style = space
indent_size = 2
# We recommend you keep these unchanged
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.md]
trim_trailing_whitespace = false

1
.env Symbolic link

@@ -0,0 +1 @@
mailcow.conf

69
.gitignore vendored Executable file

@@ -0,0 +1,69 @@
!data/conf/nginx/dynmaps.conf
!data/conf/nginx/meta_exporter.conf
!data/conf/nginx/site.conf
!/**/.gitkeep
*.iml
.idea
.vscode/*
data/assets/ssl-example/*
data/assets/ssl/*
data/conf/borgmatic/
data/conf/clamav/whitelist.ign2
data/conf/dovecot/acl_anyone
data/conf/dovecot/dovecot-master.passwd
data/conf/dovecot/dovecot-master.userdb
data/conf/dovecot/extra.conf
data/conf/dovecot/mail_replica.conf
data/conf/dovecot/global_sieve_*
data/conf/dovecot/last_login
data/conf/dovecot/lua
data/conf/dovecot/mail_plugins*
data/conf/dovecot/shared_namespace.conf
data/conf/dovecot/sni.conf
data/conf/dovecot/sogo-sso.conf
data/conf/dovecot/sogo_trusted_ip.conf
data/conf/dovecot/sql
data/conf/nextcloud-*.bak
data/conf/nginx/*.active
data/conf/nginx/*.bak
data/conf/nginx/*.conf
data/conf/nginx/*.custom
data/conf/phpfpm/sogo-sso/sogo-sso.pass
data/conf/portainer/
data/conf/postfix/allow_mailcow_local.regexp
data/conf/postfix/custom_postscreen_whitelist.cidr
data/conf/postfix/custom_transport.pcre
data/conf/postfix/extra.cf
data/conf/postfix/sni.map
data/conf/postfix/sni.map.db
data/conf/postfix/sql
data/conf/postfix/dns_blocklists.cf
data/conf/postfix/dnsbl_reply.map
data/conf/rspamd/custom/*
data/conf/rspamd/local.d/*
data/conf/rspamd/override.d/*
data/conf/sogo/custom-theme.js
data/conf/sogo/plist_ldap
data/conf/sogo/sieve.creds
data/conf/sogo/sogo-full.svg
data/gitea/
data/gogs/
data/hooks/dovecot/*
data/hooks/phpfpm/*
data/hooks/postfix/*
data/hooks/rspamd/*
data/hooks/sogo/*
data/hooks/unbound/*
data/web/templates/cache/*
!data/web/templates/cache/.gitkeep
data/web/.well-known/acme-challenge
data/web/css/build/0081-custom-mailcow.css
data/web/inc/vars.local.inc.php
data/web/inc/app_info.inc.php
data/web/nextcloud*/
data/web/rc*/
mailcow.conf_backup
rebuild-images.sh
refresh_images.sh
update_diffs/
create_cold_standby.sh

46
CODE_OF_CONDUCT.md Executable file

@@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, documentation edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at info@servercow.de. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

58
CONTRIBUTING.md Executable file

@@ -0,0 +1,58 @@
# Contribution Guidelines
**_Last modified on 15th August 2024_**
First of all, thank you for wanting to provide a bugfix or a new feature for the mailcow community, it's because of your help that the project can continue to grow!
As we want to keep mailcow's development structured, we have set up these guidelines, which help you create your issue/pull request accordingly.
**PLEASE NOTE THAT WE MIGHT CLOSE ISSUES/PULL REQUESTS IF THEY DON'T FULFILL THE GUIDELINES WRITTEN IN THIS DOCUMENT.** So please check these guidelines before you propose an issue/pull request.
## Topics
- [Pull Requests](#pull-requests)
- [Issue Reporting](#issue-reporting)
- [Guidelines](#issue-reporting-guidelines)
- [Issue Report Guide](#issue-report-guide)
## Pull Requests
**_Last modified on 15th August 2024_**
Please note the following regarding pull requests:
1. **ALWAYS** create your PR using the staging branch of your locally cloned mailcow instance, as the pull request will end up in said staging branch of mailcow once approved. Ideally, you should simply create a new branch for your pull request that is named after the type of your PR (e.g. `feat/` for feature updates or `fix/` for bug fixes) and the actual content (e.g. `sogo-6.0.0` for an update from SOGo to version 6 or `html-escape` for a fix that includes escaping HTML in mailcow). *A short example is shown below this list.*
2. **ALWAYS** report/request issues/features in English, even though mailcow is a German-based company. This allows other GitHub users who do not speak German to read and reply to your issues/requests as well.
3. Please **keep** this pull request branch **clean** and free of commits that have nothing to do with your changes (e.g. commits from other users or other branches). *If you make changes to the `update.sh` script or other scripts that trigger a commit, there is usually a developer mode for working cleanly in this case.*
4. **Test your changes before you commit them as a pull request.** <ins>If possible</ins>, write a small **test log** or demonstrate the functionality with a **screenshot or GIF**. *We will of course also test your pull request ourselves, but proof from you will save us the question of whether you have tested your own changes yourself.*
5. **Please use** the pull request template we provide when creating a pull request. *HINT: While editing you will encounter comments that look like `<!-- CONTENT -->`. These can be removed or kept, as they will not be rendered later on GitHub! Please only create actual content without said comments.*
6. Please **ALWAYS** create the actual pull request against the staging branch and **NEVER** directly against the master branch. *If you forget to do this, our moobot will remind you to switch the branch to staging.*
7. Wait for a merge commit: It may happen that we do not accept your pull request immediately or sometimes not at all for various reasons. Please do not be disappointed if this is the case. We always endeavor to incorporate any meaningful changes from the community into the mailcow project.
8. If you are planning larger and therefore more complex pull requests, it would be advisable to announce them first in a separate issue and to start implementing only after the idea has been accepted, in order to avoid unnecessary frustration and effort!
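As a quick, hypothetical illustration of points 1 and 6 (the fork URL, branch name, and commit message below are placeholders, not part of these guidelines):

```bash
# Clone your fork and start from the staging branch
git clone https://github.com/<your-user>/mailcow-dockerized.git
cd mailcow-dockerized
git checkout staging

# Branch name = type prefix + content, e.g. fix/html-escape or feat/sogo-6.0.0
git checkout -b fix/html-escape

# Commit only the changes that belong to this PR, then push
git add -p
git commit -m "Escape HTML output in the mailcow UI"
git push origin fix/html-escape
# Finally, open the pull request on GitHub against mailcow/mailcow-dockerized:staging
```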
---
## Issue Reporting
**_Last modified on 15th August 2024_**
If you plan to report an issue within mailcow, please read and understand the following rules:
### Issue Reporting Guidelines
1. **ONLY** use the issue tracker for bug reports or improvement requests and NOT for support questions. For support questions you can either contact the [mailcow community on Telegram](https://docs.mailcow.email/#community-support-and-chat) or the mailcow team directly in exchange for a [support fee](https://docs.mailcow.email/#commercial-support).
2. **ONLY** report an error if you have the **necessary know-how (at least the basics)** for the administration of an e-mail server and the usage of Docker. mailcow is a complex, fully-fledged e-mail server including groupware components on a Docker foundation, and it requires a bit of technical know-how to debug and operate.
3. **ALWAYS** report/request issues/features in English, even though mailcow is a German-based company. This allows other GitHub users who do not speak German to read and reply to your issues/requests as well.
4. **ONLY** report bugs that are contained in the latest mailcow release series. *The definition of the latest release series includes the last major patch (e.g. 2023-12) and all minor patches (revisions) below it (e.g. 2023-12a, b, c etc.).* New issue reports published starting from January 1, 2024 must meet this criterion, as versions below the latest releases are no longer supported by us.
5. When reporting a problem, please be as detailed as possible and include even the smallest changes to your mailcow installation. Simply fill out the corresponding bug report form in detail and accurately to minimize possible questions.
6. **Before you open an issue/feature request**, please first check whether a similar request already exists in the mailcow tracker on GitHub. If so, please add your input to the existing request instead.
7. When you create an issue/feature request: please note that its creation does <ins>**not guarantee an instant implementation or fix by the mailcow team or the community**</ins>.
8. Please **ALWAYS** anonymize any sensitive information in your bug report or feature request before submitting it.
### Issue Report Guide
1. Read your logs; follow them to find the reason for your problem (see the example below this list).
2. Follow the leads given in your logfiles and start investigating.
3. Restart the troubled service or the whole stack to see if the problem persists.
4. Read the [documentation](https://docs.mailcow.email/) of the troubled service and search its bugtracker for your problem.
5. Search our [issues](https://github.com/mailcow/mailcow-dockerized/issues) for your problem.
6. [Create an issue](https://github.com/mailcow/mailcow-dockerized/issues/new/choose) over at our GitHub repository if you think your problem might be a bug or a missing feature you badly need. But please make sure that you include **all the logs** and a full description of your problem.
7. Ask your questions in our community-driven [support channels](https://docs.mailcow.email/#community-support-and-chat).
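For step 1, a minimal sketch of how logs are typically inspected in a dockerized mailcow (the installation path and service name here are assumptions; adjust them to your setup):

```bash
cd /opt/mailcow-dockerized   # assumed default installation path

# Tail the logs of a single troubled service, e.g. Postfix
docker compose logs --tail=200 -f postfix-mailcow

# Get a quick overview of the state of the whole stack
docker compose ps
```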
## When creating an issue/feature request or a pull request, you will be asked to confirm these guidelines.

674
LICENSE Executable file

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
{one line to give the program's name and a brief idea of what it does.}
Copyright (C) {year} {name of author}
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
{project} Copyright (C) {year} {fullname}
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

45
README.md Executable file

@@ -0,0 +1,45 @@
# mailcow: dockerized - 🐮 + 🐋 = 💕
[![Translation status](https://translate.mailcow.email/widgets/mailcow-dockerized/-/translation/svg-badge.svg)](https://translate.mailcow.email/engage/mailcow-dockerized/)
[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/mailcow_email.svg?style=social&label=Follow%20%40mailcow_email)](https://twitter.com/mailcow_email)
![Mastodon Follow](https://img.shields.io/mastodon/follow/109388212176073348?domain=https%3A%2F%2Fmailcow.social&label=Follow%20%40doncow%40mailcow.social&link=https%3A%2F%2Fmailcow.social%2F%40doncow)
## Want to support mailcow?
Please [consider a support contract with Servercow](https://www.servercow.de/mailcow?lang=en#support) to support further development. _We_ support _you_ while _you_ support _us_. :)
You can also [get a SAL](https://www.servercow.de/mailcow?lang=en#sal), which is a one-time payment with no liabilities or recurring fees.
Or just spread the word: moo.
## Info, documentation and support
Please see [the official documentation](https://docs.mailcow.email/) for installation and support instructions. 🐄
🐛 **If you have found a critical security issue, please mail us at [info at servercow.de](mailto:info@servercow.de).**
## Cowmunity
[mailcow community](https://community.mailcow.email)
[Telegram mailcow channel](https://telegram.me/mailcow)
[Telegram mailcow Off-Topic channel](https://t.me/mailcowOfftopic)
[Official 𝕏 (Twitter) Account](https://twitter.com/mailcow_email)
[Official Mastodon Account](https://mailcow.social/@doncow)
Telegram desktop clients are available for [multiple platforms](https://desktop.telegram.org). You can search the group's history for keywords.
## Misc
**Important**: mailcow makes use of various open-source software. Please make sure you agree with their licenses before using mailcow.
Every part of mailcow itself is released under the **GNU General Public License, Version 3**.
mailcow is a registered word mark of The Infrastructure Company GmbH, Parkstr. 42, 47877 Willich, Germany.
The project is managed and maintained by The Infrastructure Company GmbH.
Originally created by @andryyy (André)

42
SECURITY.md Executable file

@@ -0,0 +1,42 @@
# Security Policies and Procedures
This document outlines security procedures and general policies for the _mailcow: dockerized_ project as found on [mailcow-dockerized](https://github.com/mailcow/mailcow-dockerized).
* [Reporting a Vulnerability](#reporting-a-vulnerability)
* [Disclosure Policy](#disclosure-policy)
* [Comments on this Policy](#comments-on-this-policy)
## Reporting a Vulnerability
The mailcow team and community take all security vulnerabilities
seriously. Thank you for improving the security of our open source
software. We appreciate your efforts and responsible disclosure and will
make every effort to acknowledge your contributions.
Report security vulnerabilities by emailing the mailcow team at:
info at servercow.de
The mailcow team will acknowledge your email as soon as possible, and will
send a more detailed response afterwards indicating the next steps in
handling your report. After the initial reply to your report, the mailcow
team will endeavor to keep you informed of the progress towards a fix and
full announcement, and may ask for additional information or guidance.
Report security vulnerabilities in third-party modules to the person or
team maintaining the module.
## Disclosure Policy
When the mailcow team receives a security bug report, they will assign it
to a primary handler. This person will coordinate the fix and release
process, involving the following steps:
* Confirm the problem and determine the affected versions.
* Audit code to find any potential similar problems.
* Prepare fixes for all releases still under maintenance.
## Comments on this Policy
If you have suggestions on how this process could be improved, please submit a pull request.

28
data/Dockerfiles/acme/Dockerfile Executable file

@@ -0,0 +1,28 @@
FROM alpine:3.20
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
RUN apk upgrade --no-cache \
&& apk add --update --no-cache \
bash \
curl \
openssl \
bind-tools \
jq \
mariadb-client \
redis \
tini \
tzdata \
python3 \
acme-tiny --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community/
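# acme-tiny is pulled from the Alpine edge community repository, presumably for a newer version than Alpine 3.20 ships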
COPY acme.sh /srv/acme.sh
COPY functions.sh /srv/functions.sh
COPY obtain-certificate.sh /srv/obtain-certificate.sh
COPY reload-configurations.sh /srv/reload-configurations.sh
COPY expand6.sh /srv/expand6.sh
RUN chmod +x /srv/*.sh
CMD ["/sbin/tini", "-g", "--", "/srv/acme.sh"]

431
data/Dockerfiles/acme/acme.sh Executable file

@@ -0,0 +1,431 @@
#!/bin/bash
set -o pipefail
exec 5>&1
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
export REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
export REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
echo "Waiting for Redis..."
sleep 2
done
source /srv/functions.sh
# Thanks to https://github.com/cvmiller -> https://github.com/cvmiller/expand6
source /srv/expand6.sh
# Skipping IP check when we like to live dangerously
if [[ "${SKIP_IP_CHECK}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
SKIP_IP_CHECK=y
fi
# Skipping HTTP check when we like to live dangerously
if [[ "${SKIP_HTTP_VERIFICATION}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
SKIP_HTTP_VERIFICATION=y
fi
# Request certificate for MAILCOW_HOSTNAME only
if [[ "${ONLY_MAILCOW_HOSTNAME}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
ONLY_MAILCOW_HOSTNAME=y
fi
if [[ "${AUTODISCOVER_SAN}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
AUTODISCOVER_SAN=y
fi
# Request individual certificate for every domain
if [[ "${ENABLE_SSL_SNI}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
ENABLE_SSL_SNI=y
fi
if [[ "${SKIP_LETS_ENCRYPT}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
log_f "SKIP_LETS_ENCRYPT=y, skipping Let's Encrypt..."
sleep 365d
exec $(readlink -f "$0")
fi
log_f "Waiting for Docker API..."
until ping dockerapi -c1 > /dev/null; do
sleep 1
done
log_f "Docker API OK"
log_f "Waiting for Postfix..."
until ping postfix -c1 > /dev/null; do
sleep 1
done
log_f "Postfix OK"
log_f "Waiting for Dovecot..."
until ping dovecot -c1 > /dev/null; do
sleep 1
done
log_f "Dovecot OK"
ACME_BASE=/var/lib/acme
SSL_EXAMPLE=/var/lib/ssl-example
mkdir -p ${ACME_BASE}/acme
# Migrate
[[ -f ${ACME_BASE}/acme/private/privkey.pem ]] && mv ${ACME_BASE}/acme/private/privkey.pem ${ACME_BASE}/acme/key.pem
[[ -f ${ACME_BASE}/acme/private/account.key ]] && mv ${ACME_BASE}/acme/private/account.key ${ACME_BASE}/acme/account.pem
if [[ -f ${ACME_BASE}/acme/key.pem && -f ${ACME_BASE}/acme/cert.pem ]]; then
if verify_hash_match ${ACME_BASE}/acme/cert.pem ${ACME_BASE}/acme/key.pem; then
log_f "Migrating to SNI folder structure..."
CERT_DOMAIN=($(openssl x509 -noout -text -in ${ACME_BASE}/acme/cert.pem | grep "Subject:" | sed -e 's/\(Subject:\)\|\(CN = \)\|\(CN=\)//g' | sed -e 's/^[[:space:]]*//'))
CERT_DOMAINS=(${CERT_DOMAIN} $(openssl x509 -noout -text -in ${ACME_BASE}/acme/cert.pem | grep "DNS:" | sed -e 's/\(DNS:\)\|,//g' | sed "s/${CERT_DOMAIN}//" | sed -e 's/^[[:space:]]*//'))
mkdir -p ${ACME_BASE}/${CERT_DOMAIN}
mv ${ACME_BASE}/acme/cert.pem ${ACME_BASE}/${CERT_DOMAIN}/cert.pem
# key is only copied, not moved, because it is used by all other requests too
cp ${ACME_BASE}/acme/key.pem ${ACME_BASE}/${CERT_DOMAIN}/key.pem
chmod 600 ${ACME_BASE}/${CERT_DOMAIN}/key.pem
echo -n ${CERT_DOMAINS[*]} > ${ACME_BASE}/${CERT_DOMAIN}/domains
mv ${ACME_BASE}/acme/acme.csr ${ACME_BASE}/${CERT_DOMAIN}/acme.csr
log_f "OK" no_date
fi
fi
[[ ! -f ${ACME_BASE}/dhparams.pem ]] && cp ${SSL_EXAMPLE}/dhparams.pem ${ACME_BASE}/dhparams.pem
if [[ -f ${ACME_BASE}/cert.pem ]] && [[ -f ${ACME_BASE}/key.pem ]] && [[ $(stat -c%s ${ACME_BASE}/cert.pem) != 0 ]]; then
ISSUER=$(openssl x509 -in ${ACME_BASE}/cert.pem -noout -issuer)
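  # A certificate from any issuer other than Let's Encrypt or the mailcow snake-oil CA is assumed to be managed manually, so the ACME client stays dormant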
if [[ ${ISSUER} != *"Let's Encrypt"* && ${ISSUER} != *"mailcow"* && ${ISSUER} != *"Fake LE Intermediate"* ]]; then
log_f "Found certificate with issuer other than mailcow snake-oil CA and Let's Encrypt, skipping ACME client..."
sleep 3650d
exec $(readlink -f "$0")
fi
else
if [[ -f ${ACME_BASE}/${MAILCOW_HOSTNAME}/cert.pem ]] && [[ -f ${ACME_BASE}/${MAILCOW_HOSTNAME}/key.pem ]] && verify_hash_match ${ACME_BASE}/${MAILCOW_HOSTNAME}/cert.pem ${ACME_BASE}/${MAILCOW_HOSTNAME}/key.pem; then
log_f "Restoring previous acme certificate and restarting script..."
cp ${ACME_BASE}/${MAILCOW_HOSTNAME}/cert.pem ${ACME_BASE}/cert.pem
cp ${ACME_BASE}/${MAILCOW_HOSTNAME}/key.pem ${ACME_BASE}/key.pem
# Restarting with env var set to trigger a restart
exec env TRIGGER_RESTART=1 $(readlink -f "$0")
else
log_f "Restoring mailcow snake-oil certificates and restarting script..."
cp ${SSL_EXAMPLE}/cert.pem ${ACME_BASE}/cert.pem
cp ${SSL_EXAMPLE}/key.pem ${ACME_BASE}/key.pem
exec env TRIGGER_RESTART=1 $(readlink -f "$0")
fi
fi
chmod 600 ${ACME_BASE}/key.pem
log_f "Waiting for database..."
while ! /usr/bin/mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent > /dev/null; do
sleep 2
done
log_f "Database OK"
log_f "Waiting for Nginx..."
until $(curl --output /dev/null --silent --head --fail http://nginx.${COMPOSE_PROJECT_NAME}_mailcow-network:8081); do
sleep 2
done
log_f "Nginx OK"
log_f "Waiting for resolver..."
until dig letsencrypt.org +time=3 +tries=1 @unbound > /dev/null; do
sleep 2
done
log_f "Resolver OK"
# Waiting for domain table
log_f "Waiting for domain table..."
while [[ -z ${DOMAIN_TABLE} ]]; do
curl --silent http://nginx.${COMPOSE_PROJECT_NAME}_mailcow-network/ >/dev/null 2>&1
DOMAIN_TABLE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SHOW TABLES LIKE 'domain'" -Bs)
[[ -z ${DOMAIN_TABLE} ]] && sleep 10
done
log_f "OK" no_date
log_f "Initializing, please wait..."
while true; do
POSTFIX_CERT_SERIAL="$(echo | openssl s_client -connect postfix:25 -starttls smtp 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
DOVECOT_CERT_SERIAL="$(echo | openssl s_client -connect dovecot:143 -starttls imap 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
POSTFIX_CERT_SERIAL_NEW="$(echo | openssl s_client -connect postfix:25 -starttls smtp 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
DOVECOT_CERT_SERIAL_NEW="$(echo | openssl s_client -connect dovecot:143 -starttls imap 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
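# Baseline and *_NEW serials are read back-to-back on purpose: the reload loop
# near the end of this iteration re-reads the *_NEW serials and keeps reloading
# until they differ from these baselines, i.e. until the new cert is served.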
# Re-using previous acme-mailcow account and domain keys
if [[ ! -f ${ACME_BASE}/acme/key.pem ]]; then
log_f "Generating missing domain private rsa key..."
openssl genrsa 4096 > ${ACME_BASE}/acme/key.pem
else
log_f "Using existing domain rsa key ${ACME_BASE}/acme/key.pem"
fi
if [[ ! -f ${ACME_BASE}/acme/account.pem ]]; then
log_f "Generating missing Lets Encrypt account key..."
if [[ ! -z ${ACME_CONTACT} ]]; then
if ! verify_email "${ACME_CONTACT}"; then
log_f "Invalid email address, will not start registration!"
sleep 365d
exec $(readlink -f "$0")
else
ACME_CONTACT_PARAMETER="--contact mailto:${ACME_CONTACT}"
log_f "Valid email address, using ${ACME_CONTACT} for registration"
fi
else
ACME_CONTACT_PARAMETER=""
fi
openssl genrsa 4096 > ${ACME_BASE}/acme/account.pem
else
log_f "Using existing Lets Encrypt account key ${ACME_BASE}/acme/account.pem"
fi
chmod 600 ${ACME_BASE}/acme/key.pem
chmod 600 ${ACME_BASE}/acme/account.pem
unset EXISTING_CERTS
declare -a EXISTING_CERTS
for cert_dir in ${ACME_BASE}/*/ ; do
if [[ ! -f ${cert_dir}domains ]] || [[ ! -f ${cert_dir}cert.pem ]] || [[ ! -f ${cert_dir}key.pem ]]; then
continue
fi
EXISTING_CERTS+=("$(basename ${cert_dir})")
done
# Cleaning up and init validation arrays
unset SQL_DOMAIN_ARR
unset VALIDATED_CONFIG_DOMAINS
unset ADDITIONAL_VALIDATED_SAN
unset ADDITIONAL_WC_ARR
unset ADDITIONAL_SAN_ARR
unset CERT_ERRORS
unset CERT_CHANGED
unset CERT_AMOUNT_CHANGED
unset VALIDATED_CERTIFICATES
CERT_ERRORS=0
CERT_CHANGED=0
CERT_AMOUNT_CHANGED=0
declare -a SQL_DOMAIN_ARR
declare -a VALIDATED_CONFIG_DOMAINS
declare -a ADDITIONAL_VALIDATED_SAN
declare -a ADDITIONAL_WC_ARR
declare -a ADDITIONAL_SAN_ARR
declare -a VALIDATED_CERTIFICATES
IFS=',' read -r -a TMP_ARR <<< "${ADDITIONAL_SAN}"
for i in "${TMP_ARR[@]}" ; do
if [[ "$i" =~ \.\*$ ]]; then
ADDITIONAL_WC_ARR+=(${i::-2})
else
ADDITIONAL_SAN_ARR+=($i)
fi
done
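# Illustrative example (values not from this config): ADDITIONAL_SAN="smtp.example.org,webmail.*"
# puts smtp.example.org into ADDITIONAL_SAN_ARR and "webmail" into ADDITIONAL_WC_ARR,
# which is later expanded to webmail.<domain> for every active mailcow domain.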
if [[ ${AUTODISCOVER_SAN} == "y" ]]; then
# Fetch certs for autoconfig and autodiscover subdomains
ADDITIONAL_WC_ARR+=('autodiscover' 'autoconfig')
fi
if [[ ${SKIP_IP_CHECK} != "y" ]]; then
# Start IP detection
log_f "Detecting IP addresses..."
IPV4=$(get_ipv4)
IPV6=$(get_ipv6)
log_f "OK: ${IPV4}, ${IPV6:-"0000:0000:0000:0000:0000:0000:0000:0000"}"
fi
#########################################
# IP and webroot challenge verification #
SQL_DOMAINS=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain WHERE backupmx=0 and active=1" -Bs)
if [[ $? -ne 0 ]]; then
log_f "Failed to read SQL domains, retrying in 1 minute..."
sleep 1m
exec $(readlink -f "$0")
fi
while read domains; do
if [[ -z "${domains}" ]]; then
# ignore empty lines
continue
fi
SQL_DOMAIN_ARR+=("${domains}")
done <<< "${SQL_DOMAINS}"
if [[ ${ONLY_MAILCOW_HOSTNAME} != "y" ]]; then
for SQL_DOMAIN in "${SQL_DOMAIN_ARR[@]}"; do
unset VALIDATED_CONFIG_DOMAINS_SUBDOMAINS
declare -a VALIDATED_CONFIG_DOMAINS_SUBDOMAINS
for SUBDOMAIN in "${ADDITIONAL_WC_ARR[@]}"; do
if [[ "${SUBDOMAIN}.${SQL_DOMAIN}" != "${MAILCOW_HOSTNAME}" ]]; then
if check_domain "${SUBDOMAIN}.${SQL_DOMAIN}"; then
VALIDATED_CONFIG_DOMAINS_SUBDOMAINS+=("${SUBDOMAIN}.${SQL_DOMAIN}")
fi
fi
done
VALIDATED_CONFIG_DOMAINS+=("${VALIDATED_CONFIG_DOMAINS_SUBDOMAINS[*]}")
done
fi
if check_domain ${MAILCOW_HOSTNAME}; then
VALIDATED_MAILCOW_HOSTNAME="${MAILCOW_HOSTNAME}"
fi
if [[ ${ONLY_MAILCOW_HOSTNAME} != "y" ]]; then
for SAN in "${ADDITIONAL_SAN_ARR[@]}"; do
# Skip on CAA errors for SAN
SAN_PARENT_DOMAIN=$(echo ${SAN} | cut -d. -f2-)
SAN_CAAS=( $(dig CAA ${SAN_PARENT_DOMAIN} +short | sed -n 's/[0-9][0-9]* issue "\(.*\)"/\1/p') )
if [[ ! -z ${SAN_CAAS} ]]; then
if [[ ${SAN_CAAS[@]} =~ "letsencrypt.org" ]]; then
log_f "Validated CAA for parent domain ${SAN_PARENT_DOMAIN} of ${SAN}"
else
log_f "Skipping ACME validation for ${SAN}: Lets Encrypt disallowed for ${SAN} by CAA record"
continue
fi
fi
if [[ ${SAN} == ${MAILCOW_HOSTNAME} ]]; then
continue
fi
if check_domain ${SAN}; then
ADDITIONAL_VALIDATED_SAN+=("${SAN}")
fi
done
fi
# Unique domains for server certificate
if [[ ${ENABLE_SSL_SNI} == "y" ]]; then
# create certificate for server name and fqdn SANs only
SERVER_SAN_VALIDATED=(${VALIDATED_MAILCOW_HOSTNAME} $(echo ${ADDITIONAL_VALIDATED_SAN[*]} | xargs -n1 | sort -u | xargs))
else
# create certificate for all domains, including all subdomains from other domains [*]
SERVER_SAN_VALIDATED=(${VALIDATED_MAILCOW_HOSTNAME} $(echo ${VALIDATED_CONFIG_DOMAINS[*]} ${ADDITIONAL_VALIDATED_SAN[*]} | xargs -n1 | sort -u | xargs))
fi
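# Dedup example (hypothetical values): with VALIDATED_MAILCOW_HOSTNAME=mail.example.org
# and SANs "imap.example.org imap.example.org", the sort -u pipeline yields
# "mail.example.org imap.example.org" - the hostname stays first and names the certificate.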
if [[ ! -z ${SERVER_SAN_VALIDATED[*]} ]]; then
CERT_NAME=${SERVER_SAN_VALIDATED[0]}
VALIDATED_CERTIFICATES+=("${CERT_NAME}")
# obtain server certificate if required
ACME_CONTACT_PARAMETER=${ACME_CONTACT_PARAMETER} DOMAINS=${SERVER_SAN_VALIDATED[@]} /srv/obtain-certificate.sh rsa
RETURN="$?"
if [[ "$RETURN" == "0" ]]; then # 0 = cert created successfully
CERT_AMOUNT_CHANGED=1
CERT_CHANGED=1
elif [[ "$RETURN" == "1" ]]; then # 1 = cert renewed successfully
CERT_CHANGED=1
elif [[ "$RETURN" == "2" ]]; then # 2 = cert not due for renewal
:
else
CERT_ERRORS=1
fi
# copy hostname certificate to default/server certificate
# do not copy the key when the cert is missing, as this can lead to a cert/key mismatch
if [[ -f ${ACME_BASE}/${CERT_NAME}/cert.pem ]]; then
cp ${ACME_BASE}/${CERT_NAME}/cert.pem ${ACME_BASE}/cert.pem
cp ${ACME_BASE}/${CERT_NAME}/key.pem ${ACME_BASE}/key.pem
fi
fi
# individual certificates for SNI [@]
if [[ ${ENABLE_SSL_SNI} == "y" ]]; then
for VALIDATED_DOMAINS in "${VALIDATED_CONFIG_DOMAINS[@]}"; do
VALIDATED_DOMAINS_ARR=(${VALIDATED_DOMAINS})
unset VALIDATED_DOMAINS_SORTED
declare -a VALIDATED_DOMAINS_SORTED
VALIDATED_DOMAINS_SORTED=(${VALIDATED_DOMAINS_ARR[0]} $(echo ${VALIDATED_DOMAINS_ARR[@]:1} | xargs -n1 | sort -u | xargs))
# remove all domain names that are already inside the server certificate (SERVER_SAN_VALIDATED)
for domain in "${SERVER_SAN_VALIDATED[@]}"; do
for i in "${!VALIDATED_DOMAINS_SORTED[@]}"; do
if [[ ${VALIDATED_DOMAINS_SORTED[i]} = $domain ]]; then
unset 'VALIDATED_DOMAINS_SORTED[i]'
fi
done
done
if [[ ! -z ${VALIDATED_DOMAINS_SORTED[*]} ]]; then
CERT_NAME=${VALIDATED_DOMAINS_SORTED[0]}
VALIDATED_CERTIFICATES+=("${CERT_NAME}")
# obtain certificate if required
DOMAINS=${VALIDATED_DOMAINS_SORTED[@]} /srv/obtain-certificate.sh rsa
RETURN="$?"
if [[ "$RETURN" == "0" ]]; then # 0 = cert created successfully
CERT_AMOUNT_CHANGED=1
CERT_CHANGED=1
elif [[ "$RETURN" == "1" ]]; then # 1 = cert renewed successfully
CERT_CHANGED=1
elif [[ "$RETURN" == "2" ]]; then # 2 = cert not due for renewal
:
else
CERT_ERRORS=1
fi
fi
done
fi
if [[ -z ${VALIDATED_CERTIFICATES[*]} ]]; then
log_f "Cannot validate any hostnames, skipping Let's Encrypt for 1 hour."
log_f "Use SKIP_LETS_ENCRYPT=y in mailcow.conf to skip it permanently."
${REDIS_CMDLINE} SET ACME_FAIL_TIME "$(date +%s)"
sleep 1h
exec $(readlink -f "$0")
fi
# find orphaned certificates if no errors occurred
if [[ "${CERT_ERRORS}" == "0" ]]; then
for EXISTING_CERT in "${EXISTING_CERTS[@]}"; do
if [[ ! "`printf '_%s_\n' "${VALIDATED_CERTIFICATES[@]}"`" == *"_${EXISTING_CERT}_"* ]]; then
DATE=$(date +%Y-%m-%d_%H_%M_%S)
log_f "Found orphaned certificate: ${EXISTING_CERT} - archiving it at ${ACME_BASE}/backups/${EXISTING_CERT}/"
BACKUP_DIR=${ACME_BASE}/backups/${EXISTING_CERT}/${DATE}
# archive rsa cert and any other files
mkdir -p ${ACME_BASE}/backups/${EXISTING_CERT}
mv ${ACME_BASE}/${EXISTING_CERT} ${BACKUP_DIR}
CERT_CHANGED=1
CERT_AMOUNT_CHANGED=1
fi
done
fi
# reload on new or changed certificates
if [[ "${CERT_CHANGED}" == "1" ]]; then
rm -f "${ACME_BASE}/force_renew" 2> /dev/null
RELOAD_LOOP_C=1
while [[ "${POSTFIX_CERT_SERIAL}" == "${POSTFIX_CERT_SERIAL_NEW}" ]] || [[ "${DOVECOT_CERT_SERIAL}" == "${DOVECOT_CERT_SERIAL_NEW}" ]] || [[ ${#POSTFIX_CERT_SERIAL_NEW} -ne 36 ]] || [[ ${#DOVECOT_CERT_SERIAL_NEW} -ne 36 ]]; do
log_f "Reloading or restarting services... (${RELOAD_LOOP_C})"
RELOAD_LOOP_C=$((RELOAD_LOOP_C + 1))
CERT_AMOUNT_CHANGED=${CERT_AMOUNT_CHANGED} /srv/reload-configurations.sh
log_f "Waiting for containers to settle..."
sleep 10
until nc -z dovecot 143; do
sleep 1
done
until nc -z postfix 25; do
sleep 1
done
POSTFIX_CERT_SERIAL_NEW="$(echo | openssl s_client -connect postfix:25 -starttls smtp 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
DOVECOT_CERT_SERIAL_NEW="$(echo | openssl s_client -connect dovecot:143 -starttls imap 2>/dev/null | openssl x509 -inform pem -noout -serial | cut -d "=" -f 2)"
if [[ ${RELOAD_LOOP_C} -gt 3 ]]; then
log_f "Some services do return old end dates, something went wrong!"
${REDIS_CMDLINE} SET ACME_FAIL_TIME "$(date +%s)"
break;
fi
done
fi
case "$CERT_ERRORS" in
0) # all successful
if [[ "${CERT_CHANGED}" == "1" ]]; then
if [[ "${CERT_AMOUNT_CHANGED}" == "1" ]]; then
log_f "Certificates successfully requested and renewed where required, sleeping one day"
else
log_f "Certificates were successfully renewed where required, sleeping for another day."
fi
else
log_f "Certificates were successfully validated, no changes or renewals required, sleeping for another day."
fi
sleep 1d
;;
*) # non-zero
log_f "Some errors occurred, retrying in 30 minutes..."
${REDIS_CMDLINE} SET ACME_FAIL_TIME "$(date +%s)"
sleep 30m
exec $(readlink -f "$0")
;;
esac
done

131
data/Dockerfiles/acme/expand6.sh Executable file
View file

@ -0,0 +1,131 @@
#!/bin/bash
##################################################################################
#
# Copyright (C) 2017 Craig Miller
#
# See the file "LICENSE" for information on usage and redistribution
# of this file, and for a DISCLAIMER OF ALL WARRANTIES.
# Distributed under GPLv2 License
#
##################################################################################
# IPv6 Address Expansion functions
#
# by Craig Miller 19 Feb 2017
#
# 16 Nov 2017 v0.93 - added CLI functionality
VERSION=0.93
empty_addr="0000:0000:0000:0000:0000:0000:0000:0000"
empty_addr_len=${#empty_addr}
function usage {
echo " $0 - expand compressed IPv6 addresss "
echo " e.g. $0 2001:db8:1:12:123::456 "
echo " "
echo " -t self test"
echo " "
echo " By Craig Miller - Version: $VERSION"
exit 1
}
if [ "$1" == "-h" ]; then
#call help
usage
fi
#
# Expands IPv6 quibble to 4 digits with leading zeros e.g. db8 -> 0db8
#
# Returns string with expanded quibble
function expand_quibble() {
addr=$1
# create array of quibbles
addr_array=(${addr//:/ })
addr_array_len=${#addr_array[@]}
# step thru quibbles
for ((i=0; i< $addr_array_len ; i++ ))
do
quibble=${addr_array[$i]}
quibble_len=${#quibble}
case $quibble_len in
1) quibble="000$quibble";;
2) quibble="00$quibble";;
3) quibble="0$quibble";;
esac
addr_array[$i]=$quibble
done
# reconstruct addr from quibbles
return_str=${addr_array[*]}
return_str="${return_str// /:}"
echo $return_str
}
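# e.g. expand_quibble 2001:db8:1 -> 2001:0db8:0001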
#
# Expands IPv6 address :: format to full zeros
#
# Returns string with expanded address
function expand() {
if [[ $1 == *"::"* ]]; then
# check for leading zeros on front_addr
if [[ $1 == "::"* ]]; then
front_addr=0
else
front_addr=$(echo $1 | sed -r 's;([^ ]+)::.*;\1;')
fi
# check for trailing zeros on back_addr
if [[ $1 == *"::" ]]; then
back_addr=0
else
back_addr=$(echo $1 | sed -r 's;.*::([^ ]+);\1;')
fi
front_addr=$(expand_quibble $front_addr)
back_addr=$(expand_quibble $back_addr)
new_addr=$empty_addr
front_addr_len=${#front_addr}
back_addr_len=${#back_addr}
# calculate fill needed
num_zeros=$(($empty_addr_len - $front_addr_len - $back_addr_len - 1))
#fill_str=${empty_addr[0]:0:$num_zeros}
new_addr="$front_addr:${empty_addr[0]:0:$num_zeros}$back_addr"
# return expanded address
echo $new_addr
else
# return input with expanded quibbles
expand_quibble $1
fi
}
# self test - call with '-t' parameter
if [ "$1" == "-t" ]; then
# add address examples to test
expand fd11::1d70:cf84:18ef:d056
expand 2a01::1
expand fe80::f203:8cff:fe3f:f041
expand 2001:db8:123::5
expand 2001:470:ebbd:0:f203:8cff:fe3f:f041
# special cases
expand ::1
expand fd32:197d:3022:1101::
exit 1
fi
# allow script to be sourced (with no arguments)
if [[ $1 != "" ]]; then
# validate input is an IPv6 address
if [[ $1 == *":"* ]]; then
expand $1
else
echo "ERROR: unregcognized IPv6 address $1"
exit 1
fi
fi

data/Dockerfiles/acme/functions.sh
View file

@ -0,0 +1,132 @@
#!/bin/bash
log_f() {
if [[ ${2} == "no_nl" ]]; then
echo -n "$(date) - ${1}"
elif [[ ${2} == "no_date" ]]; then
echo "${1}"
elif [[ ${2} != "redis_only" ]]; then
echo "$(date) - ${1}"
fi
if [[ ${3} == "b64" ]]; then
${REDIS_CMDLINE} LPUSH ACME_LOG "{\"time\":\"$(date +%s)\",\"message\":\"base64,$(printf '%s' "${MAILCOW_HOSTNAME} - ${1}")\"}" > /dev/null
else
${REDIS_CMDLINE} LPUSH ACME_LOG "{\"time\":\"$(date +%s)\",\"message\":\"$(printf '%s' "${MAILCOW_HOSTNAME} - ${1}" | \
tr '%&;$"[]{}-\r\n' ' ')\"}" > /dev/null
fi
}
verify_email(){
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"
if [[ $1 =~ ${regex} ]]; then
return 0
else
return 1
fi
}
verify_hash_match(){
CERT_HASH=$(openssl x509 -in "${1}" -noout -pubkey | openssl md5)
KEY_HASH=$(openssl pkey -in "${2}" -pubout | openssl md5)
if [[ ${CERT_HASH} != ${KEY_HASH} ]]; then
log_f "Certificate and key hashes do not match!"
return 1
else
log_f "Verified hashes."
return 0
fi
}
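# Manual equivalent of verify_hash_match (illustrative):
#   openssl x509 -in cert.pem -noout -pubkey | openssl md5
#   openssl pkey -in key.pem -pubout | openssl md5
# Both digests must be identical for a matching cert/key pair.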
get_ipv4(){
local IPV4=
local IPV4_SRCS=
local TRY=
IPV4_SRCS[0]="ip4.mailcow.email"
IPV4_SRCS[1]="ip4.nevondo.com"
until [[ ! -z ${IPV4} ]] || [[ ${TRY} -ge 10 ]]; do
IPV4=$(curl --connect-timeout 3 -m 10 -L4s ${IPV4_SRCS[$RANDOM % ${#IPV4_SRCS[@]} ]} | grep -E "^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$")
[[ ! -z ${TRY} ]] && sleep 1
TRY=$((TRY+1))
done
echo ${IPV4}
}
get_ipv6(){
local IPV6=
local IPV6_SRCS=
local TRY=
IPV6_SRCS[0]="ip6.mailcow.email"
IPV6_SRCS[1]="ip6.nevondo.com"
until [[ ! -z ${IPV6} ]] || [[ ${TRY} -ge 10 ]]; do
IPV6=$(curl --connect-timeout 3 -m 10 -L6s ${IPV6_SRCS[$RANDOM % ${#IPV6_SRCS[@]} ]} | grep "^\([0-9a-fA-F]\{0,4\}:\)\{1,7\}[0-9a-fA-F]\{0,4\}$")
[[ ! -z ${TRY} ]] && sleep 1
TRY=$((TRY+1))
done
echo ${IPV6}
}
check_domain(){
DOMAIN=$1
A_DOMAIN=$(dig A ${DOMAIN} +short | tail -n 1)
AAAA_DOMAIN=$(dig AAAA ${DOMAIN} +short | tail -n 1)
# Hard-fail on CAA errors for MAILCOW_HOSTNAME
PARENT_DOMAIN=$(echo ${DOMAIN} | cut -d. -f2-)
CAAS=( $(dig CAA ${PARENT_DOMAIN} +short | sed -n 's/[0-9][0-9]* issue "\(.*\)"/\1/p') )
if [[ ! -z ${CAAS} ]]; then
if [[ ${CAAS[@]} =~ "letsencrypt.org" ]]; then
log_f "Validated CAA for parent domain ${PARENT_DOMAIN}"
else
log_f "Lets Encrypt disallowed for ${PARENT_DOMAIN} by CAA record"
return 1
fi
fi
# Reset AAAA_DOMAIN if the lookup did not return a valid IPv6 address (e.g. a CNAME without a v6-enabled target)
if [[ ! -z ${AAAA_DOMAIN} ]] && [[ -z $(echo ${AAAA_DOMAIN} | grep "^\([0-9a-fA-F]\{0,4\}:\)\{1,7\}[0-9a-fA-F]\{0,4\}$") ]]; then
AAAA_DOMAIN=
fi
if [[ ! -z ${AAAA_DOMAIN} ]]; then
log_f "Found AAAA record for ${DOMAIN}: ${AAAA_DOMAIN} - skipping A record check"
if [[ $(expand ${IPV6:-"0000:0000:0000:0000:0000:0000:0000:0000"}) == $(expand ${AAAA_DOMAIN}) ]] || [[ ${SKIP_IP_CHECK} == "y" ]] || [[ ${SNAT6_TO_SOURCE} != "n" ]]; then
if verify_challenge_path "${DOMAIN}" 6; then
log_f "Confirmed AAAA record with IP $(expand ${AAAA_DOMAIN})"
return 0
else
log_f "Confirmed AAAA record with IP $(expand ${AAAA_DOMAIN}), but HTTP validation failed"
fi
else
log_f "Cannot match your IP $(expand ${IPV6:-"0000:0000:0000:0000:0000:0000:0000:0000"}) against hostname ${DOMAIN} (DNS returned $(expand ${AAAA_DOMAIN}))"
fi
elif [[ ! -z ${A_DOMAIN} ]]; then
log_f "Found A record for ${DOMAIN}: ${A_DOMAIN}"
if [[ ${IPV4:-ERR} == ${A_DOMAIN} ]] || [[ ${SKIP_IP_CHECK} == "y" ]] || [[ ${SNAT_TO_SOURCE} != "n" ]]; then
if verify_challenge_path "${DOMAIN}" 4; then
log_f "Confirmed A record ${A_DOMAIN}"
return 0
else
log_f "Confirmed A record with IP ${A_DOMAIN}, but HTTP validation failed"
fi
else
log_f "Cannot match your IP ${IPV4} against hostname ${DOMAIN} (DNS returned ${A_DOMAIN})"
fi
else
log_f "No A or AAAA record found for hostname ${DOMAIN}"
fi
return 1
}
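# Typical call (illustrative): check_domain mail.example.org
# Returns 0 only if the resolved A/AAAA record matches our detected IP (unless
# IP checks are skipped) and the HTTP challenge path is reachable under that name.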
verify_challenge_path(){
if [[ ${SKIP_HTTP_VERIFICATION} == "y" ]]; then
echo '(skipping check, returning 0)'
return 0
fi
# verify_challenge_path URL 4|6
RANDOM_N=${RANDOM}${RANDOM}${RANDOM}
echo ${RANDOM_N} > /var/www/acme/${RANDOM_N}
if [[ "$(curl --insecure -${2} -L http://${1}/.well-known/acme-challenge/${RANDOM_N} --silent)" == "${RANDOM_N}" ]]; then
rm /var/www/acme/${RANDOM_N}
return 0
else
rm /var/www/acme/${RANDOM_N}
return 1
fi
}

data/Dockerfiles/acme/obtain-certificate.sh
View file

@ -0,0 +1,130 @@
#!/bin/bash
# Return values / exit codes
# 0 = cert created successfully
# 1 = cert renewed successfully
# 2 = cert not due for renewal
# * = errors
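# Typical caller pattern (illustrative, mirrors the main ACME loop):
#   DOMAINS="mail.example.org autoconfig.example.org" /srv/obtain-certificate.sh rsa
#   RETURN=$?  # 0/1 = created/renewed, 2 = unchanged, anything else = error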
source /srv/functions.sh
CERT_DOMAINS=(${DOMAINS[@]})
CERT_DOMAIN=${CERT_DOMAINS[0]}
ACME_BASE=/var/lib/acme
TYPE=${1}
PREFIX=""
# only support rsa certificates for now
if [[ "${TYPE}" != "rsa" ]]; then
log_f "Unknown certificate type '${TYPE}' requested"
exit 5
fi
DOMAINS_FILE=${ACME_BASE}/${CERT_DOMAIN}/domains
CERT=${ACME_BASE}/${CERT_DOMAIN}/${PREFIX}cert.pem
SHARED_KEY=${ACME_BASE}/acme/${PREFIX}key.pem # must already exist
KEY=${ACME_BASE}/${CERT_DOMAIN}/${PREFIX}key.pem
CSR=${ACME_BASE}/${CERT_DOMAIN}/${PREFIX}acme.csr
if [[ -z ${CERT_DOMAINS[*]} ]]; then
log_f "Missing CERT_DOMAINS to obtain a certificate"
exit 3
fi
if [[ "${LE_STAGING}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
if [[ ! -z "${DIRECTORY_URL}" ]]; then
log_f "Cannot use DIRECTORY_URL with LE_STAGING=y - ignoring DIRECTORY_URL"
fi
log_f "Using Let's Encrypt staging servers"
DIRECTORY_URL='--directory-url https://acme-staging-v02.api.letsencrypt.org/directory'
elif [[ ! -z "${DIRECTORY_URL}" ]]; then
log_f "Using custom directory URL ${DIRECTORY_URL}"
DIRECTORY_URL="--directory-url ${DIRECTORY_URL}"
fi
if [[ -f ${DOMAINS_FILE} && "$(cat ${DOMAINS_FILE})" == "${CERT_DOMAINS[*]}" ]]; then
if [[ ! -f ${CERT} || ! -f "${KEY}" || -f "${ACME_BASE}/force_renew" ]]; then
log_f "Certificate ${CERT} doesn't exist yet or forced renewal - start obtaining"
# Certificate exists and did not change but could be due for renewal (30 days)
elif ! openssl x509 -checkend 2592000 -noout -in ${CERT} > /dev/null; then
log_f "Certificate ${CERT} is due for renewal (< 30 days) - start renewing"
else
log_f "Certificate ${CERT} validation done, neither changed nor due for renewal."
exit 2
fi
else
log_f "Certificate ${CERT} missing or changed domains '${CERT_DOMAINS[*]}' - start obtaining"
fi
# Make backup
if [[ -f ${CERT} ]]; then
DATE=$(date +%Y-%m-%d_%H_%M_%S)
BACKUP_DIR=${ACME_BASE}/backups/${CERT_DOMAIN}/${PREFIX}${DATE}
log_f "Creating backups in ${BACKUP_DIR} ..."
mkdir -p ${BACKUP_DIR}/
[[ -f ${DOMAINS_FILE} ]] && cp ${DOMAINS_FILE} ${BACKUP_DIR}/
[[ -f ${CERT} ]] && cp ${CERT} ${BACKUP_DIR}/
[[ -f ${KEY} ]] && cp ${KEY} ${BACKUP_DIR}/
[[ -f ${CSR} ]] && cp ${CSR} ${BACKUP_DIR}/
fi
mkdir -p ${ACME_BASE}/${CERT_DOMAIN}
if [[ ! -f ${KEY} ]]; then
log_f "Copying shared private key for this certificate..."
cp ${SHARED_KEY} ${KEY}
chmod 600 ${KEY}
fi
# Generating CSR
printf "[SAN]\nsubjectAltName=" > /tmp/_SAN
printf "DNS:%s," "${CERT_DOMAINS[@]}" >> /tmp/_SAN
sed -i '$s/,$//' /tmp/_SAN
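# Resulting /tmp/_SAN for two hypothetical domains:
#   [SAN]
#   subjectAltName=DNS:mail.example.org,DNS:autoconfig.example.org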
openssl req -new -sha256 -key ${KEY} -subj "/" -reqexts SAN -config <(cat "$(openssl version -d | sed 's/.*"\(.*\)"/\1/g')/openssl.cnf" /tmp/_SAN) > ${CSR}
# acme-tiny writes info to stderr and the certificate to stdout
# The redirects will do the following:
# - redirect stdout to temp certificate file
# - redirect acme-tiny stderr to stdout (logs to variable ACME_RESPONSE)
# - tee stderr to get live output and log to dockerd
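# Sketch of the pattern (assumes the caller opened fd 5, e.g. via `exec 5>&1`):
#   OUT=$(cmd 2>&1 > /tmp/out | tee /dev/fd/5; exit ${PIPESTATUS[0]})
# stdout lands in /tmp/out, stderr is captured in $OUT (and echoed live on fd 5),
# and $? afterwards reflects cmd's own exit code.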
log_f "Checking resolver..."
until dig letsencrypt.org +time=3 +tries=1 @unbound > /dev/null; do
sleep 2
done
log_f "Resolver OK"
log_f "Using command acme-tiny ${DIRECTORY_URL} ${ACME_CONTACT_PARAMETER} --account-key ${ACME_BASE}/acme/account.pem --disable-check --csr ${CSR} --acme-dir /var/www/acme/"
ACME_RESPONSE=$(acme-tiny ${DIRECTORY_URL} ${ACME_CONTACT_PARAMETER} \
--account-key ${ACME_BASE}/acme/account.pem \
--disable-check \
--csr ${CSR} \
--acme-dir /var/www/acme/ 2>&1 > /tmp/_cert.pem | tee /dev/fd/5; exit ${PIPESTATUS[0]})
SUCCESS="$?"
ACME_RESPONSE_B64=$(echo "${ACME_RESPONSE}" | openssl enc -e -A -base64)
log_f "${ACME_RESPONSE_B64}" redis_only b64
case "$SUCCESS" in
0) # cert requested
log_f "Deploying certificate ${CERT}..."
# Deploy the new certificate and key
# Moving temp cert to {domain} folder
if verify_hash_match /tmp/_cert.pem ${KEY}; then
RETURN=0 # certificate created
if [[ -f ${CERT} ]]; then
RETURN=1 # certificate renewed
fi
mv -f /tmp/_cert.pem ${CERT}
echo -n ${CERT_DOMAINS[*]} > ${DOMAINS_FILE}
rm /var/www/acme/* 2> /dev/null
log_f "Certificate successfully obtained"
exit ${RETURN}
else
log_f "Certificate was successfully requested, but key and certificate have non-matching hashes, ignoring certificate"
exit 4
fi
;;
*) # non-zero is non-fun
log_f "Failed to obtain certificate ${CERT} for domains '${CERT_DOMAINS[*]}'"
redis-cli -h redis SET ACME_FAIL_TIME "$(date +%s)"
exit 100${SUCCESS}
;;
esac

data/Dockerfiles/acme/reload-configurations.sh
View file

@ -0,0 +1,45 @@
#!/bin/bash
# Reading container IDs
# Wrapping as array to ensure trimmed content when calling $NGINX etc.
NGINX=($(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" | jq -rc "select( .name | tostring | contains(\"nginx-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" | tr "\n" " "))
DOVECOT=($(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" | jq -rc "select( .name | tostring | contains(\"dovecot-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" | tr "\n" " "))
POSTFIX=($(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" | jq -rc "select( .name | tostring | contains(\"postfix-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" | tr "\n" " "))
reload_nginx(){
echo "Reloading Nginx..."
NGINX_RELOAD_RET=$(curl -X POST --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${NGINX}/exec -d '{"cmd":"reload", "task":"nginx"}' --silent -H 'Content-type: application/json' | jq -r .type)
[[ ${NGINX_RELOAD_RET} != 'success' ]] && { echo "Could not reload Nginx, restarting container..."; restart_container ${NGINX} ; }
}
reload_dovecot(){
echo "Reloading Dovecot..."
DOVECOT_RELOAD_RET=$(curl -X POST --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${DOVECOT}/exec -d '{"cmd":"reload", "task":"dovecot"}' --silent -H 'Content-type: application/json' | jq -r .type)
[[ ${DOVECOT_RELOAD_RET} != 'success' ]] && { echo "Could not reload Dovecot, restarting container..."; restart_container ${DOVECOT} ; }
}
reload_postfix(){
echo "Reloading Postfix..."
POSTFIX_RELOAD_RET=$(curl -X POST --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${POSTFIX}/exec -d '{"cmd":"reload", "task":"postfix"}' --silent -H 'Content-type: application/json' | jq -r .type)
[[ ${POSTFIX_RELOAD_RET} != 'success' ]] && { echo "Could not reload Postfix, restarting container..."; restart_container ${POSTFIX} ; }
}
restart_container(){
for container in $*; do
echo "Restarting ${container}..."
C_REST_OUT=$(curl -X POST --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${container}/restart --silent | jq -r '.msg')
echo "${C_REST_OUT}"
done
}
if [[ "${CERT_AMOUNT_CHANGED}" == "1" ]]; then
restart_container ${NGINX}
restart_container ${DOVECOT}
restart_container ${POSTFIX}
else
reload_nginx
#reload_dovecot
restart_container ${DOVECOT}
#reload_postfix
restart_container ${POSTFIX}
fi

data/Dockerfiles/backup/Dockerfile
View file

@ -0,0 +1,3 @@
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y pigz

data/Dockerfiles/clamd/Dockerfile
View file

@ -0,0 +1,25 @@
FROM alpine:3.20
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
RUN apk upgrade --no-cache \
&& apk add --update --no-cache \
rsync \
clamav \
bind-tools \
bash \
tini
# init
COPY clamd.sh /clamd.sh
RUN chmod +x /sbin/tini
# healthcheck
COPY healthcheck.sh /healthcheck.sh
COPY clamdcheck.sh /usr/local/bin
RUN chmod +x /healthcheck.sh
RUN chmod +x /usr/local/bin/clamdcheck.sh
HEALTHCHECK --start-period=6m CMD "/healthcheck.sh"
ENTRYPOINT []
CMD ["/sbin/tini", "-g", "--", "/clamd.sh"]

105
data/Dockerfiles/clamd/clamd.sh Executable file
View file

@ -0,0 +1,105 @@
#!/bin/bash
if [[ "${SKIP_CLAMD}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_CLAMD=y, skipping ClamAV..."
sleep 365d
exit 0
fi
# Cleaning up garbage
echo "Cleaning up tmp files..."
rm -rf /var/lib/clamav/clamav-*.tmp
# Prepare whitelist
mkdir -p /run/clamav /var/lib/clamav
if [[ -s /etc/clamav/whitelist.ign2 ]]; then
echo "Copying non-empty whitelist.ign2 to /var/lib/clamav/whitelist.ign2"
cp /etc/clamav/whitelist.ign2 /var/lib/clamav/whitelist.ign2
fi
if [[ ! -f /var/lib/clamav/whitelist.ign2 ]]; then
echo "Creating /var/lib/clamav/whitelist.ign2"
cat <<EOF > /var/lib/clamav/whitelist.ign2
# Please restart ClamAV after changing signatures
Example-Signature.Ignore-1
PUA.Win.Trojan.EmbeddedPDF-1
PUA.Pdf.Trojan.EmbeddedJavaScript-1
PUA.Pdf.Trojan.OpenActionObjectwithJavascript-1
EOF
fi
chown clamav:clamav -R /var/lib/clamav /run/clamav
chmod 755 /var/lib/clamav
chmod 644 -R /var/lib/clamav/*
chmod 750 /run/clamav
stat /var/lib/clamav/whitelist.ign2
dos2unix /var/lib/clamav/whitelist.ign2
sed -i '/^\s*$/d' /var/lib/clamav/whitelist.ign2
# Copy back to /etc/clamav to expose the file as-is to the administrator
cp -p /var/lib/clamav/whitelist.ign2 /etc/clamav/whitelist.ign2
BACKGROUND_TASKS=()
echo "Running freshclam..."
freshclam
(
while true; do
sleep 12600
freshclam
done
) &
BACKGROUND_TASKS+=($!)
(
while true; do
sleep 10m
SANE_MIRRORS="$(dig +ignore +short rsync.sanesecurity.net)"
for sane_mirror in ${SANE_MIRRORS}; do
CE=
rsync -avp --chown=clamav:clamav --chmod=Du=rwx,Dgo=rx,Fu=rw,Fog=r --timeout=5 rsync://${sane_mirror}/sanesecurity/ \
--include 'blurl.ndb' \
--include 'junk.ndb' \
--include 'jurlbl.ndb' \
--include 'jurbla.ndb' \
--include 'phishtank.ndb' \
--include 'phish.ndb' \
--include 'spamimg.hdb' \
--include 'scam.ndb' \
--include 'rogue.hdb' \
--include 'sanesecurity.ftm' \
--include 'sigwhitelist.ign2' \
--exclude='*' /var/lib/clamav/
CE=$?
chmod 755 /var/lib/clamav/
if [ ${CE} -eq 0 ]; then
while [ ! -z "$(pidof freshclam)" ]; do
echo "Freshclam is active, waiting..."
sleep 5
done
echo RELOAD | nc clamd-mailcow 3310
break
fi
done
sleep 12h
done
) &
BACKGROUND_TASKS+=($!)
nice -n10 clamd &
BACKGROUND_TASKS+=($!)
while true; do
for bg_task in ${BACKGROUND_TASKS[*]}; do
if ! kill -0 ${bg_task} 1>&2; then
echo "Worker ${bg_task} died, stopping container waiting for respawn..."
kill -TERM 1
fi
sleep 10
done
done

data/Dockerfiles/clamd/clamdcheck.sh
View file

@ -0,0 +1,14 @@
#!/bin/sh
set -eu
if [ "${CLAMAV_NO_CLAMD:-}" != "false" ]; then
if [ "$(echo "PING" | nc localhost 3310)" != "PONG" ]; then
echo "ERROR: Unable to contact server"
exit 1
fi
echo "Clamd is up"
fi
exit 0

data/Dockerfiles/clamd/healthcheck.sh
View file

@ -0,0 +1,9 @@
#!/bin/bash
if [[ "${SKIP_CLAMD}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_CLAMD=y, skipping ClamAV..."
exit 0
fi
# run clamd healthcheck
/usr/local/bin/clamdcheck.sh

data/Dockerfiles/dockerapi/Dockerfile
View file

@ -0,0 +1,27 @@
FROM alpine:3.20
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
ARG PIP_BREAK_SYSTEM_PACKAGES=1
WORKDIR /app
RUN apk add --update --no-cache python3 \
py3-pip \
openssl \
tzdata \
py3-psutil \
py3-redis \
py3-async-timeout \
&& pip3 install --upgrade pip \
fastapi \
uvicorn \
aiodocker \
docker
RUN mkdir /app/modules
COPY docker-entrypoint.sh /app/
COPY main.py /app/main.py
COPY modules/ /app/modules/
ENTRYPOINT ["/bin/sh", "/app/docker-entrypoint.sh"]
CMD ["python", "main.py"]

data/Dockerfiles/dockerapi/docker-entrypoint.sh
View file

@ -0,0 +1,9 @@
#!/bin/bash
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
-keyout /app/dockerapi_key.pem \
-out /app/dockerapi_cert.pem \
-subj /CN=dockerapi/O=mailcow \
-addext subjectAltName=DNS:dockerapi
exec "$@"

data/Dockerfiles/dockerapi/main.py
View file

@ -0,0 +1,261 @@
import os
import sys
import uvicorn
import json
import uuid
import async_timeout
import asyncio
import aiodocker
import docker
import logging
from logging.config import dictConfig
from fastapi import FastAPI, Response, Request
from modules.DockerApi import DockerApi
from redis import asyncio as aioredis
from contextlib import asynccontextmanager
dockerapi = None
@asynccontextmanager
async def lifespan(app: FastAPI):
global dockerapi
# Initialize a custom logger
logger = logging.getLogger("dockerapi")
logger.setLevel(logging.INFO)
# Configure the logger to output logs to the terminal
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
formatter = logging.Formatter("%(levelname)s: %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("Init APP")
# Init redis client
if os.environ['REDIS_SLAVEOF_IP'] != "":
redis_client = redis = await aioredis.from_url(f"redis://{os.environ['REDIS_SLAVEOF_IP']}:{os.environ['REDIS_SLAVEOF_PORT']}/0")
else:
redis_client = redis = await aioredis.from_url("redis://redis-mailcow:6379/0")
# Init docker clients
sync_docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock', version='auto')
async_docker_client = aiodocker.Docker(url='unix:///var/run/docker.sock')
dockerapi = DockerApi(redis_client, sync_docker_client, async_docker_client, logger)
logger.info("Subscribe to redis channel")
# Subscribe to redis channel
dockerapi.pubsub = redis.pubsub()
await dockerapi.pubsub.subscribe("MC_CHANNEL")
asyncio.create_task(handle_pubsub_messages(dockerapi.pubsub))
yield
# Close docker connections
dockerapi.sync_docker_client.close()
await dockerapi.async_docker_client.close()
# Close redis
await dockerapi.pubsub.unsubscribe("MC_CHANNEL")
await dockerapi.redis_client.close()
app = FastAPI(lifespan=lifespan)
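# Illustrative calls against this API (hostname and id are placeholders; the
# certificate is self-signed, hence --insecure):
#   curl --insecure https://dockerapi/containers/json
#   curl --insecure -X POST -H 'Content-type: application/json' \
#        -d '{"cmd":"reload","task":"nginx"}' https://dockerapi/containers/<id>/exec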
# Define Routes
@app.get("/host/stats")
async def get_host_update_stats():
global dockerapi
if not dockerapi.host_stats_isUpdating:
asyncio.create_task(dockerapi.get_host_stats())
dockerapi.host_stats_isUpdating = True
while True:
if await dockerapi.redis_client.exists('host_stats'):
break
await asyncio.sleep(1.5)
stats = json.loads(await dockerapi.redis_client.get('host_stats'))
return Response(content=json.dumps(stats, indent=4), media_type="application/json")
@app.get("/containers/{container_id}/json")
async def get_container(container_id : str):
global dockerapi
if container_id and container_id.isalnum():
try:
for container in (await dockerapi.async_docker_client.containers.list()):
if container._id == container_id:
container_info = await container.show()
return Response(content=json.dumps(container_info, indent=4), media_type="application/json")
res = {
"type": "danger",
"msg": "no container found"
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
except Exception as e:
res = {
"type": "danger",
"msg": str(e)
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = {
"type": "danger",
"msg": "no or invalid id defined"
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
@app.get("/containers/json")
async def get_containers():
global dockerapi
containers = {}
try:
for container in (await dockerapi.async_docker_client.containers.list()):
container_info = await container.show()
containers.update({container_info['Id']: container_info})
return Response(content=json.dumps(containers, indent=4), media_type="application/json")
except Exception as e:
res = {
"type": "danger",
"msg": str(e)
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
@app.post("/containers/{container_id}/{post_action}")
async def post_containers(container_id : str, post_action : str, request: Request):
global dockerapi
try:
request_json = await request.json()
except Exception as err:
request_json = {}
if container_id and container_id.isalnum() and post_action:
try:
"""Dispatch container_post api call"""
if post_action == 'exec':
if not request_json or 'cmd' not in request_json:
res = {
"type": "danger",
"msg": "cmd is missing"
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
if not request_json or 'task' not in request_json:
res = {
"type": "danger",
"msg": "task is missing"
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
api_call_method_name = '__'.join(['container_post', str(post_action), str(request_json['cmd']), str(request_json['task']) ])
else:
api_call_method_name = '__'.join(['container_post', str(post_action) ])
api_call_method = getattr(dockerapi, api_call_method_name, lambda container_id: Response(content=json.dumps({'type': 'danger', 'msg':'container_post - unknown api call' }, indent=4), media_type="application/json"))
dockerapi.logger.info("api call: %s, container_id: %s" % (api_call_method_name, container_id))
return api_call_method(request_json, container_id=container_id)
except Exception as e:
dockerapi.logger.error("error - container_post: %s" % str(e))
res = {
"type": "danger",
"msg": str(e)
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = {
"type": "danger",
"msg": "invalid container id or missing action"
}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
@app.post("/container/{container_id}/stats/update")
async def post_container_update_stats(container_id : str):
global dockerapi
# start update task for container if no task is running
if container_id not in dockerapi.containerIds_to_update:
asyncio.create_task(dockerapi.get_container_stats(container_id))
dockerapi.containerIds_to_update.append(container_id)
while True:
if await dockerapi.redis_client.exists(container_id + '_stats'):
break
await asyncio.sleep(1.5)
stats = json.loads(await dockerapi.redis_client.get(container_id + '_stats'))
return Response(content=json.dumps(stats, indent=4), media_type="application/json")
# PubSub Handler
async def handle_pubsub_messages(channel: aioredis.client.PubSub):
global dockerapi
while True:
try:
async with async_timeout.timeout(60):
message = await channel.get_message(ignore_subscribe_messages=True, timeout=30)
if message is not None:
# Parse message
data_json = json.loads(message['data'].decode('utf-8'))
dockerapi.logger.info(f"PubSub Received - {json.dumps(data_json)}")
# Handle api_call
if 'api_call' in data_json:
# api_call: container_post
if data_json['api_call'] == "container_post":
if 'post_action' in data_json and 'container_name' in data_json:
try:
"""Dispatch container_post api call"""
request_json = {}
if data_json['post_action'] == 'exec':
if 'request' in data_json:
request_json = data_json['request']
if 'cmd' in request_json:
if 'task' in request_json:
api_call_method_name = '__'.join(['container_post', str(data_json['post_action']), str(request_json['cmd']), str(request_json['task']) ])
else:
dockerapi.logger.error("api call: task missing")
else:
dockerapi.logger.error("api call: cmd missing")
else:
dockerapi.logger.error("api call: request missing")
else:
api_call_method_name = '__'.join(['container_post', str(data_json['post_action'])])
if api_call_method_name:
api_call_method = getattr(dockerapi, api_call_method_name)
if api_call_method:
dockerapi.logger.info("api call: %s, container_name: %s" % (api_call_method_name, data_json['container_name']))
api_call_method(request_json, container_name=data_json['container_name'])
else:
dockerapi.logger.error("api call not found: %s, container_name: %s" % (api_call_method_name, data_json['container_name']))
except Exception as e:
dockerapi.logger.error("container_post: %s" % str(e))
else:
dockerapi.logger.error("api call: missing container_name, post_action or request")
else:
dockerapi.logger.error("Unknwon PubSub recieved - %s" % json.dumps(data_json))
else:
dockerapi.logger.error("Unknwon PubSub recieved - %s" % json.dumps(data_json))
await asyncio.sleep(0.0)
except asyncio.TimeoutError:
pass
if __name__ == '__main__':
uvicorn.run(
app,
host="0.0.0.0",
port=443,
ssl_certfile="/app/dockerapi_cert.pem",
ssl_keyfile="/app/dockerapi_key.pem",
log_level="info",
loop="none"
)

data/Dockerfiles/dockerapi/modules/DockerApi.py
View file

@ -0,0 +1,487 @@
import psutil
import sys
import os
import re
import time
import json
import asyncio
import platform
import traceback
from datetime import datetime
from fastapi import FastAPI, Response, Request
class DockerApi:
def __init__(self, redis_client, sync_docker_client, async_docker_client, logger):
self.redis_client = redis_client
self.sync_docker_client = sync_docker_client
self.async_docker_client = async_docker_client
self.logger = logger
self.host_stats_isUpdating = False
self.containerIds_to_update = []
# api call: container_post - post_action: stop
def container_post__stop(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(all=True, filters=filters):
container.stop()
res = { 'type': 'success', 'msg': 'command completed successfully'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: start
def container_post__start(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(all=True, filters=filters):
container.start()
res = { 'type': 'success', 'msg': 'command completed successfully'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: restart
def container_post__restart(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(all=True, filters=filters):
container.restart()
res = { 'type': 'success', 'msg': 'command completed successfully'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: top
def container_post__top(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(all=True, filters=filters):
res = { 'type': 'success', 'msg': container.top()}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: stats
def container_post__stats(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(all=True, filters=filters):
for stat in container.stats(decode=True, stream=True):
res = { 'type': 'success', 'msg': stat}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: mailq - task: delete
def container_post__exec__mailq__delete(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-d %s' % i for i in filtered_qids]
sanitized_string = str(' '.join(flagged_qids))
for container in self.sync_docker_client.containers.list(filters=filters):
postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
return self.exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: mailq - task: hold
def container_post__exec__mailq__hold(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-h %s' % i for i in filtered_qids]
sanitized_string = str(' '.join(flagged_qids))
for container in self.sync_docker_client.containers.list(filters=filters):
postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
return self.exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: mailq - task: cat
def container_post__exec__mailq__cat(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
sanitized_string = str(' '.join(filtered_qids))
for container in self.sync_docker_client.containers.list(filters=filters):
postcat_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postcat -q " + sanitized_string], user='postfix')
if not postcat_return:
postcat_return = 'err: invalid'
return self.exec_run_handler('utf8_text_only', postcat_return)
# api call: container_post - post_action: exec - cmd: mailq - task: unhold
def container_post__exec__mailq__unhold(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-H %s' % i for i in filtered_qids]
sanitized_string = str(' '.join(flagged_qids))
for container in self.sync_docker_client.containers.list(filters=filters):
postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
return self.exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: mailq - task: deliver
def container_post__exec__mailq__deliver(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-i %s' % i for i in filtered_qids]
for container in self.sync_docker_client.containers.list(filters=filters):
for i in flagged_qids:
postqueue_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postqueue " + i], user='postfix')
# todo: check each exit code
res = { 'type': 'success', 'msg': 'Scheduled immediate delivery'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: mailq - task: list
def container_post__exec__mailq__list(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
mailq_return = container.exec_run(["/usr/sbin/postqueue", "-j"], user='postfix')
return self.exec_run_handler('utf8_text_only', mailq_return)
# api call: container_post - post_action: exec - cmd: mailq - task: flush
def container_post__exec__mailq__flush(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
postqueue_r = container.exec_run(["/usr/sbin/postqueue", "-f"], user='postfix')
return self.exec_run_handler('generic', postqueue_r)
# api call: container_post - post_action: exec - cmd: mailq - task: super_delete
def container_post__exec__mailq__super_delete(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
postsuper_r = container.exec_run(["/usr/sbin/postsuper", "-d", "ALL"])
return self.exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: system - task: fts_rescan
def container_post__exec__system__fts_rescan(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'username' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
rescan_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -u '" + request_json['username'].replace("'", "'\\''") + "'"], user='vmail')
if rescan_return.exit_code == 0:
res = { 'type': 'success', 'msg': 'fts_rescan: rescan triggered'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = { 'type': 'warning', 'msg': 'fts_rescan error'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
if 'all' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
rescan_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -A"], user='vmail')
if rescan_return.exit_code == 0:
res = { 'type': 'success', 'msg': 'fts_rescan: rescan triggered'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = { 'type': 'warning', 'msg': 'fts_rescan error'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: system - task: df
def container_post__exec__system__df(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'dir' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
df_return = container.exec_run(["/bin/bash", "-c", "/bin/df -H '" + request_json['dir'].replace("'", "'\\''") + "' | /usr/bin/tail -n1 | /usr/bin/tr -s [:blank:] | /usr/bin/tr ' ' ','"], user='nobody')
if df_return.exit_code == 0:
return df_return.output.decode('utf-8').rstrip()
else:
return "0,0,0,0,0,0"
# api call: container_post - post_action: exec - cmd: system - task: mysql_upgrade
def container_post__exec__system__mysql_upgrade(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
sql_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/mysql_upgrade -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "'\n"], user='mysql')
if sql_return.exit_code == 0:
matched = False
for line in sql_return.output.decode('utf-8').split("\n"):
if 'is already upgraded to' in line:
matched = True
if matched:
res = { 'type': 'success', 'msg':'mysql_upgrade: already upgraded', 'text': sql_return.output.decode('utf-8')}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
container.restart()
res = { 'type': 'warning', 'msg':'mysql_upgrade: upgrade was applied', 'text': sql_return.output.decode('utf-8')}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = { 'type': 'error', 'msg': 'mysql_upgrade: error running command', 'text': sql_return.output.decode('utf-8')}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: system - task: mysql_tzinfo_to_sql
def container_post__exec__system__mysql_tzinfo_to_sql(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
sql_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/mysql_tzinfo_to_sql /usr/share/zoneinfo | /bin/sed 's/Local time zone must be set--see zic manual page/FCTY/' | /usr/bin/mysql -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "' mysql \n"], user='mysql')
if sql_return.exit_code == 0:
res = { 'type': 'info', 'msg': 'mysql_tzinfo_to_sql: command completed successfully', 'text': sql_return.output.decode('utf-8')}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = { 'type': 'error', 'msg': 'mysql_tzinfo_to_sql: error running command', 'text': sql_return.output.decode('utf-8')}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: reload - task: dovecot
def container_post__exec__reload__dovecot(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
reload_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/dovecot reload"])
return self.exec_run_handler('generic', reload_return)
# api call: container_post - post_action: exec - cmd: reload - task: postfix
def container_post__exec__reload__postfix(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
reload_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postfix reload"])
return self.exec_run_handler('generic', reload_return)
# api call: container_post - post_action: exec - cmd: reload - task: nginx
def container_post__exec__reload__nginx(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
reload_return = container.exec_run(["/bin/sh", "-c", "/usr/sbin/nginx -s reload"])
return self.exec_run_handler('generic', reload_return)
# api call: container_post - post_action: exec - cmd: sieve - task: list
def container_post__exec__sieve__list(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'username' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
sieve_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm sieve list -u '" + request_json['username'].replace("'", "'\\''") + "'"])
return self.exec_run_handler('utf8_text_only', sieve_return)
# api call: container_post - post_action: exec - cmd: sieve - task: print
def container_post__exec__sieve__print(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'username' in request_json and 'script_name' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
cmd = ["/bin/bash", "-c", "/usr/bin/doveadm sieve get -u '" + request_json['username'].replace("'", "'\\''") + "' '" + request_json['script_name'].replace("'", "'\\''") + "'"]
sieve_return = container.exec_run(cmd)
return self.exec_run_handler('utf8_text_only', sieve_return)
# api call: container_post - post_action: exec - cmd: maildir - task: cleanup
def container_post__exec__maildir__cleanup(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'maildir' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
sane_name = re.sub(r'\W+', '', request_json['maildir'])
vmail_name = request_json['maildir'].replace("'", "'\\''")
cmd_vmail = "if [[ -d '/var/vmail/" + vmail_name + "' ]]; then /bin/mv '/var/vmail/" + vmail_name + "' '/var/vmail/_garbage/" + str(int(time.time())) + "_" + sane_name + "'; fi"
index_name = request_json['maildir'].split("/")
if len(index_name) > 1:
index_name = index_name[1].replace("'", "'\\''") + "@" + index_name[0].replace("'", "'\\''")
cmd_vmail_index = "if [[ -d '/var/vmail_index/" + index_name + "' ]]; then /bin/mv '/var/vmail_index/" + index_name + "' '/var/vmail/_garbage/" + str(int(time.time())) + "_" + sane_name + "_index'; fi"
cmd = ["/bin/bash", "-c", cmd_vmail + " && " + cmd_vmail_index]
else:
cmd = ["/bin/bash", "-c", cmd_vmail]
maildir_cleanup = container.exec_run(cmd, user='vmail')
return self.exec_run_handler('generic', maildir_cleanup)
# api call: container_post - post_action: exec - cmd: rspamd - task: worker_password
def container_post__exec__rspamd__worker_password(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'raw' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
cmd = "/usr/bin/rspamadm pw -e -p '" + request_json['raw'].replace("'", "'\\''") + "' 2> /dev/null"
cmd_response = self.exec_cmd_container(container, cmd, user="_rspamd")
matched = False
for line in cmd_response.split("\n"):
if '$2$' in line:
hash = line.strip()
hash_out = re.search(r'\$2\$.+$', hash).group(0)
rspamd_passphrase_hash = re.sub(r'[^0-9a-zA-Z\$]+', '', hash_out.rstrip())
rspamd_password_filename = "/etc/rspamd/override.d/worker-controller-password.inc"
cmd = '''/bin/echo 'enable_password = "%s";' > %s && cat %s''' % (rspamd_passphrase_hash, rspamd_password_filename, rspamd_password_filename)
cmd_response = self.exec_cmd_container(container, cmd, user="_rspamd")
if rspamd_passphrase_hash.startswith("$2$") and rspamd_passphrase_hash in cmd_response:
container.restart()
matched = True
if matched:
res = { 'type': 'success', 'msg': 'command completed successfully' }
self.logger.info('success changing Rspamd password')
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
self.logger.error('failed changing Rspamd password')
res = { 'type': 'danger', 'msg': 'command did not complete' }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# Collect host stats
async def get_host_stats(self, wait=5):
try:
system_time = datetime.now()
host_stats = {
"cpu": {
"cores": psutil.cpu_count(),
"usage": psutil.cpu_percent()
},
"memory": {
"total": psutil.virtual_memory().total,
"usage": psutil.virtual_memory().percent,
"swap": psutil.swap_memory()
},
"uptime": time.time() - psutil.boot_time(),
"system_time": system_time.strftime("%d.%m.%Y %H:%M:%S"),
"architecture": platform.machine()
}
await self.redis_client.set('host_stats', json.dumps(host_stats), ex=10)
except Exception as e:
res = {
"type": "danger",
"msg": str(e)
}
await asyncio.sleep(wait)
self.host_stats_isUpdating = False
# Collect container stats
async def get_container_stats(self, container_id, wait=5, stop=False):
if container_id and container_id.isalnum():
try:
for container in (await self.async_docker_client.containers.list()):
if container._id == container_id:
res = await container.stats(stream=False)
if await self.redis_client.exists(container_id + '_stats'):
stats = json.loads(await self.redis_client.get(container_id + '_stats'))
else:
stats = []
stats.append(res[0])
if len(stats) > 3:
del stats[0]
await self.redis_client.set(container_id + '_stats', json.dumps(stats), ex=60)
except Exception as e:
res = {
"type": "danger",
"msg": str(e)
}
else:
res = {
"type": "danger",
"msg": "no or invalid id defined"
}
await asyncio.sleep(wait)
if stop == True:
# update task was called second time, stop
self.containerIds_to_update.remove(container_id)
else:
# call update task a second time
await self.get_container_stats(container_id, wait=0, stop=True)
def exec_cmd_container(self, container, cmd, user, timeout=2, shell_cmd="/bin/bash"):
def recv_socket_data(c_socket, timeout):
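# Read from the non-blocking socket until no new data has arrived for
# `timeout` seconds, or until twice the timeout has elapsed in total;
# the timer restarts whenever a chunk is received.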
c_socket.setblocking(0)
total_data=[]
data=''
begin=time.time()
while True:
if total_data and time.time()-begin > timeout:
break
elif time.time()-begin > timeout*2:
break
try:
data = c_socket.recv(8192)
if data:
total_data.append(data.decode('utf-8'))
#change the beginning time for measurement
begin=time.time()
else:
#sleep for sometime to indicate a gap
time.sleep(0.1)
break
except:
pass
return ''.join(total_data)
try:
socket = container.exec_run([shell_cmd], stdin=True, socket=True, user=user).output._sock
if not cmd.endswith("\n"):
cmd = cmd + "\n"
socket.send(cmd.encode('utf-8'))
data = recv_socket_data(socket, timeout)
socket.close()
return data
except Exception as e:
self.logger.error("error - exec_cmd_container: %s" % str(e))
traceback.print_exc(file=sys.stdout)
def exec_run_handler(self, type, output):
if type == 'generic':
if output.exit_code == 0:
res = { 'type': 'success', 'msg': 'command completed successfully' }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
res = { 'type': 'danger', 'msg': 'command failed: ' + output.output.decode('utf-8') }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
if type == 'utf8_text_only':
return Response(content=output.output.decode('utf-8'), media_type="text/plain")


@ -0,0 +1,136 @@
FROM alpine:3.20
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced extractVersion=^(?<version>.*)$
ARG GOSU_VERSION=1.16
ENV LANG=C.UTF-8
ENV LC_ALL=C.UTF-8
# Add groups and users before installing Dovecot to not break compatibility
RUN addgroup -g 5000 vmail \
&& addgroup -g 401 dovecot \
&& addgroup -g 402 dovenull \
&& sed -i "s/999/99/" /etc/group \
&& addgroup -g 999 sogo \
&& addgroup nobody sogo \
&& adduser -D -u 5000 -G vmail -h /var/vmail vmail \
&& adduser -D -G dovecot -u 401 -h /dev/null -s /sbin/nologin dovecot \
&& adduser -D -G dovenull -u 402 -h /dev/null -s /sbin/nologin dovenull \
&& apk add --no-cache --update \
bash \
bind-tools \
findutils \
envsubst \
ca-certificates \
curl \
coreutils \
jq \
lua \
lua-cjson \
lua-socket \
lua-sql-mysql \
lua5.3-sql-mysql \
icu-data-full \
mariadb-connector-c \
gcompat \
mariadb-client \
perl \
perl-ntlm \
perl-cgi \
perl-crypt-openssl-rsa \
perl-utils \
perl-crypt-ssleay \
perl-data-uniqid \
perl-dbd-mysql \
perl-dbi \
perl-digest-hmac \
perl-dist-checkconflicts \
perl-encode-imaputf7 \
perl-file-copy-recursive \
perl-file-tail \
perl-io-socket-inet6 \
perl-io-gzip \
perl-io-socket-ssl \
perl-io-tee \
perl-ipc-run \
perl-json-webtoken \
perl-mail-imapclient \
perl-module-implementation \
perl-module-scandeps \
perl-net-ssleay \
perl-package-stash \
perl-package-stash-xs \
perl-par-packer \
perl-parse-recdescent \
perl-lockfile-simple \
libproc \
perl-readonly \
perl-regexp-common \
perl-sys-meminfo \
perl-term-readkey \
perl-test-deep \
perl-test-fatal \
perl-test-mockobject \
perl-test-mock-guard \
perl-test-pod \
perl-test-requires \
perl-test-simple \
perl-test-warn \
perl-try-tiny \
perl-unicode-string \
perl-proc-processtable \
perl-app-cpanminus \
procps \
python3 \
py3-mysqlclient \
py3-html2text \
py3-jinja2 \
py3-redis \
redis \
syslog-ng \
syslog-ng-redis \
syslog-ng-json \
supervisor \
tzdata \
wget \
dovecot \
dovecot-dev \
dovecot-lmtpd \
dovecot-lua \
dovecot-ldap \
dovecot-mysql \
dovecot-sql \
dovecot-submissiond \
dovecot-pigeonhole-plugin \
dovecot-pop3d \
dovecot-fts-solr \
dovecot-fts-flatcurve \
&& arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$arch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
COPY trim_logs.sh /usr/local/bin/trim_logs.sh
COPY clean_q_aged.sh /usr/local/bin/clean_q_aged.sh
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY imapsync /usr/local/bin/imapsync
COPY imapsync_runner.pl /usr/local/bin/imapsync_runner.pl
COPY report-spam.sieve /usr/lib/dovecot/sieve/report-spam.sieve
COPY report-ham.sieve /usr/lib/dovecot/sieve/report-ham.sieve
COPY rspamd-pipe-ham /usr/lib/dovecot/sieve/rspamd-pipe-ham
COPY rspamd-pipe-spam /usr/lib/dovecot/sieve/rspamd-pipe-spam
COPY sa-rules.sh /usr/local/bin/sa-rules.sh
COPY maildir_gc.sh /usr/local/bin/maildir_gc.sh
COPY docker-entrypoint.sh /
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY quarantine_notify.py /usr/local/bin/quarantine_notify.py
COPY quota_notify.py /usr/local/bin/quota_notify.py
COPY repl_health.sh /usr/local/bin/repl_health.sh
COPY optimize-fts.sh /usr/local/bin/optimize-fts.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]


@ -0,0 +1,20 @@
#!/bin/bash
source /source_env.sh
MAX_AGE=$(redis-cli --raw -h redis-mailcow GET Q_MAX_AGE)
if [[ -z ${MAX_AGE} ]]; then
echo "Max age for quarantine items not defined"
exit 1
fi
NUM_REGEXP='^[0-9]+$'
if ! [[ ${MAX_AGE} =~ ${NUM_REGEXP} ]] ; then
echo "Max age for quarantine items invalid"
exit 1
fi
TO_DELETE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT COUNT(id) FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY" -BN)
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DELETE FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY"
echo "Deleted ${TO_DELETE} items from quarantine table (max age is ${MAX_AGE//[!0-9]/} days)"


@ -0,0 +1,493 @@
#!/bin/bash
set -e
# Wait for MySQL to warm-up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
until dig +short mailcow.email > /dev/null; do
echo "Waiting for DNS..."
sleep 1
done
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
echo "Waiting for Redis..."
sleep 2
done
${REDIS_CMDLINE} SET DOVECOT_REPL_HEALTH 1 > /dev/null
# Create missing directories
[[ ! -d /etc/dovecot/sql/ ]] && mkdir -p /etc/dovecot/sql/
[[ ! -d /etc/dovecot/lua/ ]] && mkdir -p /etc/dovecot/lua/
[[ ! -d /etc/dovecot/conf.d/ ]] && mkdir -p /etc/dovecot/conf.d/
[[ ! -d /var/vmail/_garbage ]] && mkdir -p /var/vmail/_garbage
[[ ! -d /var/vmail/sieve ]] && mkdir -p /var/vmail/sieve
[[ ! -d /etc/sogo ]] && mkdir -p /etc/sogo
[[ ! -d /var/volatile ]] && mkdir -p /var/volatile
# Set Dovecot sql config parameters, escape " in db password
DBPASS=$(echo ${DBPASS} | sed 's/"/\\"/g')
# Create quota dict for Dovecot
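# MASTER instances account quota in the quota2 table, replicas in
# quota2replica (presumably so a replica never clobbers the master's counters)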
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
QUOTA_TABLE=quota2
else
QUOTA_TABLE=quota2replica
fi
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-quota.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/quota/storage
table = ${QUOTA_TABLE}
username_field = username
value_field = bytes
}
map {
pattern = priv/quota/messages
table = ${QUOTA_TABLE}
username_field = username
value_field = messages
}
EOF
# Create dict used for sieve pre and postfilters
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-sieve_before.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/sieve/name/\$script_name
table = sieve_before
username_field = username
value_field = id
fields {
script_name = \$script_name
}
}
map {
pattern = priv/sieve/data/\$id
table = sieve_before
username_field = username
value_field = script_data
fields {
id = \$id
}
}
EOF
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-sieve_after.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/sieve/name/\$script_name
table = sieve_after
username_field = username
value_field = id
fields {
script_name = \$script_name
}
}
map {
pattern = priv/sieve/data/\$id
table = sieve_after
username_field = username
value_field = script_data
fields {
id = \$id
}
}
EOF
echo -n ${ACL_ANYONE} > /etc/dovecot/acl_anyone
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY]) ]]; then
echo -e "\e[33mActivating Flatcurve as FTS Backend...\e[0m"
echo -e "\e[33mDepending on your previous setup a full reindex might be needed... \e[0m"
echo -e "\e[34mVisit https://docs.mailcow.email/manual-guides/Dovecot/u_e-dovecot-fts/#fts-related-dovecot-commands to learn how to reindex\e[0m"
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify fts fts_flatcurve listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify mail_log fts fts_flatcurve listescape replication' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl fts fts_flatcurve notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
elif [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify listescape replication mail_log' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
else
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify fts fts_solr listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify mail_log fts fts_solr listescape replication' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl fts fts_solr notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
fi
chmod 644 /etc/dovecot/mail_plugins /etc/dovecot/mail_plugins_imap /etc/dovecot/mail_plugins_lmtp /templates/quarantine.tpl
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-userdb.conf
# Autogenerated by mailcow
driver = mysql
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
user_query = SELECT CONCAT(JSON_UNQUOTE(JSON_VALUE(attributes, '$.mailbox_format')), mailbox_path_prefix, '%d/%n/${MAILDIR_SUB}:VOLATILEDIR=/var/volatile/%u:INDEX=/var/vmail_index/%u') AS mail, '%s' AS protocol, 5000 AS uid, 5000 AS gid, concat('*:bytes=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND (active = '1' OR active = '2')
iterate_query = SELECT username FROM mailbox WHERE active = '1' OR active = '2';
EOF
cat <<EOF > /etc/dovecot/lua/passwd-verify.lua
function auth_password_verify(req, pass)
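-- First try the mailbox password; if that fails and the service is
-- imap/smtp/sieve/pop3, fall back to per-app passwords, honouring the
-- per-protocol access flags. Successful logins are recorded in sasl_log.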
if req.domain == nil then
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, "No such user"
end
if cur == nil then
script_init()
end
if req.user == nil then
req.user = ''
end
respbody = {}
-- check against mailbox passwds
local cur,errorString = con:execute(string.format([[SELECT password FROM mailbox
WHERE username = '%s'
AND active = '1'
AND domain IN (SELECT domain FROM domain WHERE domain='%s' AND active='1')
AND IFNULL(JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.force_pw_update')), 0) != '1'
AND IFNULL(JSON_UNQUOTE(JSON_VALUE(attributes, '$.%s_access')), 1) = '1']], con:escape(req.user), con:escape(req.domain), con:escape(req.service)))
local row = cur:fetch ({}, "a")
while row do
if req.password_verify(req, row.password, pass) == 1 then
con:execute(string.format([[REPLACE INTO sasl_log (service, app_password, username, real_rip)
VALUES ("%s", 0, "%s", "%s")]], con:escape(req.service), con:escape(req.user), con:escape(req.real_rip)))
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
end
row = cur:fetch (row, "a")
end
-- check against app passwds for imap and smtp
-- app passwords are only available for imap, smtp, sieve and pop3 when using sasl
if req.service == "smtp" or req.service == "imap" or req.service == "sieve" or req.service == "pop3" then
local cur,errorString = con:execute(string.format([[SELECT app_passwd.id, %s_access AS has_prot_access, app_passwd.password FROM app_passwd
INNER JOIN mailbox ON mailbox.username = app_passwd.mailbox
WHERE mailbox = '%s'
AND app_passwd.active = '1'
AND mailbox.active = '1'
AND app_passwd.domain IN (SELECT domain FROM domain WHERE domain='%s' AND active='1')]], con:escape(req.service), con:escape(req.user), con:escape(req.domain)))
local row = cur:fetch ({}, "a")
while row do
if req.password_verify(req, row.password, pass) == 1 then
-- if password is valid and protocol access is 1 OR real_rip matches SOGo, proceed
if tostring(req.real_rip) == "__IPV4_SOGO__" then
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
elseif row.has_prot_access == "1" then
con:execute(string.format([[REPLACE INTO sasl_log (service, app_password, username, real_rip)
VALUES ("%s", %d, "%s", "%s")]], con:escape(req.service), row.id, con:escape(req.user), con:escape(req.real_rip)))
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
end
end
row = cur:fetch (row, "a")
end
end
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_PASSWORD_MISMATCH, "Failed to authenticate"
-- PoC
-- local reqbody = string.format([[{
-- "success":0,
-- "service":"%s",
-- "app_password":false,
-- "username":"%s",
-- "real_rip":"%s"
-- }]], con:escape(req.service), con:escape(req.user), con:escape(req.real_rip))
-- http.request {
-- method = "POST",
-- url = "http://nginx:8081/sasl_log.php",
-- source = ltn12.source.string(reqbody),
-- headers = {
-- ["content-type"] = "application/json",
-- ["content-length"] = tostring(#reqbody)
-- },
-- sink = ltn12.sink.table(respbody)
-- }
end
function auth_passdb_lookup(req)
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, ""
end
function script_init()
mysql = require "luasql.mysql"
http = require "socket.http"
http.TIMEOUT = 5
ltn12 = require "ltn12"
env = mysql.mysql()
con = env:connect("__DBNAME__","__DBUSER__","__DBPASS__","localhost")
return 0
end
function script_deinit()
con:close()
env:close()
end
EOF
# Temporarily set FTS depending on user choice inside mailcow.conf. Will be removed as soon as Solr is dropped
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])$ ]]; then
cat <<EOF > /etc/dovecot/conf.d/fts.conf
# Autogenerated by mailcow
plugin {
fts_autoindex = yes
fts_autoindex_exclude = \Junk
fts_autoindex_exclude2 = \Trash
fts = flatcurve
# Maximum term length can be set via the 'maxlen' argument (maxlen is
# specified in bytes, not number of UTF-8 characters)
fts_tokenizer_email_address = maxlen=100
fts_tokenizer_generic = algorithm=simple maxlen=30
# These are not flatcurve settings, but required for Dovecot FTS. See
# Dovecot FTS Configuration link above for further information.
fts_languages = en es de
fts_tokenizers = generic email-address
# OPTIONAL: Recommended default FTS core configuration
fts_filters = normalizer-icu snowball stopwords
fts_filters_en = lowercase snowball english-possessive stopwords
}
EOF
elif [[ ! "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])$ ]]; then
cat <<EOF > /etc/dovecot/conf.d/fts.conf
# Autogenerated by mailcow
plugin {
fts = solr
fts_autoindex = yes
fts_autoindex_exclude = \Junk
fts_autoindex_exclude2 = \Trash
fts_solr = url=http://solr:8983/solr/dovecot-fts/
fts_tokenizers = generic email-address
fts_tokenizer_generic = algorithm=simple
fts_filters = normalizer-icu snowball stopwords
fts_filters_en = lowercase snowball english-possessive stopwords
}
EOF
fi
# Replace patterns in app-passdb.lua
sed -i "s/__DBUSER__/${DBUSER}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__DBPASS__/${DBPASS}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__DBNAME__/${DBNAME}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__IPV4_SOGO__/${IPV4_NETWORK}.248/g" /etc/dovecot/lua/passwd-verify.lua
# Migrate old sieve_after file
[[ -f /etc/dovecot/sieve_after ]] && mv /etc/dovecot/sieve_after /etc/dovecot/global_sieve_after
# Create global sieve scripts
cat /etc/dovecot/global_sieve_after > /var/vmail/sieve/global_sieve_after.sieve
cat /etc/dovecot/global_sieve_before > /var/vmail/sieve/global_sieve_before.sieve
# Check permissions of vmail/index/garbage directories.
# Do not do this on every start-up, as it may take a very long time; use a cheap stat check instead.
if [[ $(stat -c %U /var/vmail/) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail ; fi
if [[ $(stat -c %U /var/vmail/_garbage) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail/_garbage ; fi
if [[ $(stat -c %U /var/vmail_index) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail_index ; fi
# Cleanup random user maildirs
rm -rf /var/vmail/mailcow.local/*
# Cleanup PIDs
[[ -f /tmp/quarantine_notify.pid ]] && rm /tmp/quarantine_notify.pid
# create sni configuration
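# Every /etc/ssl/mail/<dir>/ providing a "domains" file plus cert.pem/key.pem
# yields one local_name block per listed domain, e.g. (placeholder domain):
#   local_name mail.example.org {
#     ssl_cert = </etc/ssl/mail/<dir>/cert.pem
#     ssl_key = </etc/ssl/mail/<dir>/key.pem
#   }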
echo "" > /etc/dovecot/sni.conf
for cert_dir in /etc/ssl/mail/*/ ; do
if [[ ! -f ${cert_dir}domains ]] || [[ ! -f ${cert_dir}cert.pem ]] || [[ ! -f ${cert_dir}key.pem ]]; then
continue
fi
domains=($(cat ${cert_dir}domains))
for domain in ${domains[@]}; do
echo 'local_name '${domain}' {' >> /etc/dovecot/sni.conf;
echo ' ssl_cert = <'${cert_dir}'cert.pem' >> /etc/dovecot/sni.conf;
echo ' ssl_key = <'${cert_dir}'key.pem' >> /etc/dovecot/sni.conf;
echo '}' >> /etc/dovecot/sni.conf;
done
done
# Create random master for SOGo sieve features
RAND_USER=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 16 | head -n 1)
RAND_PASS=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 24 | head -n 1)
if [[ ! -z ${DOVECOT_MASTER_USER} ]] && [[ ! -z ${DOVECOT_MASTER_PASS} ]]; then
RAND_USER=${DOVECOT_MASTER_USER}
RAND_PASS=${DOVECOT_MASTER_PASS}
fi
echo ${RAND_USER}@mailcow.local:{SHA1}$(echo -n ${RAND_PASS} | sha1sum | awk '{print $1}'):::::: > /etc/dovecot/dovecot-master.passwd
echo ${RAND_USER}@mailcow.local::5000:5000:::: > /etc/dovecot/dovecot-master.userdb
echo ${RAND_USER}@mailcow.local:${RAND_PASS} > /etc/sogo/sieve.creds
if [[ -z ${MAILDIR_SUB} ]]; then
MAILDIR_SUB_SHARED=
else
MAILDIR_SUB_SHARED=/${MAILDIR_SUB}
fi
cat <<EOF > /etc/dovecot/shared_namespace.conf
# Autogenerated by mailcow
namespace {
type = shared
separator = /
prefix = Shared/%%u/
location = maildir:%%h${MAILDIR_SUB_SHARED}:INDEX=~${MAILDIR_SUB_SHARED}/Shared/%%u
subscriptions = no
list = children
}
EOF
cat <<EOF > /etc/dovecot/sogo_trusted_ip.conf
# Autogenerated by mailcow
remote ${IPV4_NETWORK}.248 {
disable_plaintext_auth = no
}
EOF
# Create random master Password for SOGo SSO
RAND_PASS=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 32 | head -n 1)
echo -n ${RAND_PASS} > /etc/phpfpm/sogo-sso.pass
cat <<EOF > /etc/dovecot/sogo-sso.conf
# Autogenerated by mailcow
passdb {
driver = static
args = allow_real_nets=${IPV4_NETWORK}.248/32 password={plain}${RAND_PASS}
}
EOF
if [[ "${MASTER}" =~ ^([nN][oO]|[nN])+$ ]]; then
# Toggling MASTER will result in a rebuild of containers, so the quota script will be recreated
cat <<'EOF' > /usr/local/bin/quota_notify.py
#!/usr/bin/python3
import sys
sys.exit()
EOF
fi
# Set mail_replica for HA setups
if [[ -n ${MAILCOW_REPLICA_IP} && -n ${DOVEADM_REPLICA_PORT} ]]; then
cat <<EOF > /etc/dovecot/mail_replica.conf
# Autogenerated by mailcow
mail_replica = tcp:${MAILCOW_REPLICA_IP}:${DOVEADM_REPLICA_PORT}
EOF
fi
# 401 is user dovecot
if [[ ! -s /mail_crypt/ecprivkey.pem || ! -s /mail_crypt/ecpubkey.pem ]]; then
openssl ecparam -name prime256v1 -genkey | openssl pkey -out /mail_crypt/ecprivkey.pem
openssl pkey -in /mail_crypt/ecprivkey.pem -pubout -out /mail_crypt/ecpubkey.pem
chown 401 /mail_crypt/ecprivkey.pem /mail_crypt/ecpubkey.pem
else
chown 401 /mail_crypt/ecprivkey.pem /mail_crypt/ecpubkey.pem
fi
# Compile sieve scripts
sievec /var/vmail/sieve/global_sieve_before.sieve
sievec /var/vmail/sieve/global_sieve_after.sieve
sievec /usr/lib/dovecot/sieve/report-spam.sieve
sievec /usr/lib/dovecot/sieve/report-ham.sieve
# Fix permissions
chown root:root /etc/dovecot/sql/*.conf
chown root:dovecot /etc/dovecot/sql/dovecot-dict-sql-sieve* /etc/dovecot/sql/dovecot-dict-sql-quota* /etc/dovecot/lua/passwd-verify.lua
chmod 640 /etc/dovecot/sql/*.conf /etc/dovecot/lua/passwd-verify.lua
chown -R vmail:vmail /var/vmail/sieve
chown -R vmail:vmail /var/volatile
chown -R vmail:vmail /var/vmail_index
adduser vmail tty
chmod g+rw /dev/console
chown root:tty /dev/console
chmod +x /usr/lib/dovecot/sieve/rspamd-pipe-ham \
/usr/lib/dovecot/sieve/rspamd-pipe-spam \
/usr/local/bin/imapsync_runner.pl \
/usr/local/bin/imapsync \
/usr/local/bin/trim_logs.sh \
/usr/local/bin/sa-rules.sh \
/usr/local/bin/clean_q_aged.sh \
/usr/local/bin/maildir_gc.sh \
/usr/local/sbin/stop-supervisor.sh \
/usr/local/bin/quota_notify.py \
/usr/local/bin/repl_health.sh \
/usr/local/bin/optimize-fts.sh
# Prepare environment file for cronjobs
printenv | sed 's/^\(.*\)$/export \1/g' > /source_env.sh
# Clean old PID if any
[[ -f /var/run/dovecot/master.pid ]] && rm /var/run/dovecot/master.pid
# Clean stopped imapsync jobs
rm -f /tmp/imapsync_busy.lock
IMAPSYNC_TABLE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SHOW TABLES LIKE 'imapsync'" -Bs)
[[ ! -z ${IMAPSYNC_TABLE} ]] && mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "UPDATE imapsync SET is_running='0'"
# Envsubst maildir_gc
echo "$(envsubst < /usr/local/bin/maildir_gc.sh)" > /usr/local/bin/maildir_gc.sh
# GUID generation
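# The instance GUID is the SHA-256 of MAILCOW_HOSTNAME concatenated with the
# mail_crypt public key; anything other than a 64-char digest is stored as INVALID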
while [[ ${VERSIONS_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = \"${DBNAME}\" AND TABLE_NAME = 'versions'") ]]; then
VERSIONS_OK=OK
else
echo "Waiting for versions table to be created..."
sleep 3
fi
done
PUBKEY_MCRYPT=$(doveconf -P 2> /dev/null | grep -i mail_crypt_global_public_key | cut -d '<' -f2)
if [ -f ${PUBKEY_MCRYPT} ]; then
GUID=$(cat <(echo ${MAILCOW_HOSTNAME}) /mail_crypt/ecpubkey.pem | sha256sum | cut -d ' ' -f1 | tr -cd "[a-fA-F0-9.:/] ")
if [ ${#GUID} -eq 64 ]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
REPLACE INTO versions (application, version) VALUES ("GUID", "${GUID}");
EOF
else
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
REPLACE INTO versions (application, version) VALUES ("GUID", "INVALID");
EOF
fi
fi
# Collect SA rules once now
/usr/local/bin/sa-rules.sh
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
# For some strange, unknown and stupid reason, Dovecot may run into a race condition when this file is not touched before it is read by dovecot/auth
# May be related to something inside Docker, I seriously don't know
touch /etc/dovecot/lua/passwd-verify.lua
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
fi
exec "$@"

20539
data/Dockerfiles/dovecot/imapsync Executable file

File diff suppressed because it is too large


@ -0,0 +1,196 @@
#!/usr/bin/perl
use DBI;
use LockFile::Simple qw(lock trylock unlock);
use Proc::ProcessTable;
use Data::Dumper qw(Dumper);
use IPC::Run 'run';
use File::Temp;
use Try::Tiny;
use sigtrap 'handler' => \&sig_handler, qw(INT TERM KILL QUIT);
sub trim { my $s = shift; $s =~ s/^\s+|\s+$//g; return $s };
my $t = Proc::ProcessTable->new;
my $imapsync_running = grep { $_->{cmndline} =~ /imapsync\s/i } @{$t->table};
if ($imapsync_running ge 1)
{
print "imapsync is active, exiting...";
exit;
}
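# Split a raw "--opt value --flag" string into a list of trimmed arguments,
# splitting on each "--" boundary, e.g. "--exclude Junk --dry" becomes
# ("--exclude", "Junk", "--dry").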
sub qqw($) {
my @params = ();
my @values = split(/(?=--)/, $_[0]);
foreach my $val (@values) {
my @tmpparam = split(/ /, $val, 2);
foreach my $tmpval (@tmpparam) {
if ($tmpval ne '') {
push @params, $tmpval;
}
}
}
foreach my $val (@params) {
$val=trim($val);
}
return @params;
}
$run_dir="/tmp";
$dsn = 'DBI:mysql:database=' . $ENV{'DBNAME'} . ';mysql_socket=/var/run/mysqld/mysqld.sock';
$lock_file = $run_dir . "/imapsync_busy";
$lockmgr = LockFile::Simple->make(-autoclean => 1, -max => 1);
$lockmgr->lock($lock_file) || die "can't lock ${lock_file}";
$dbh = DBI->connect($dsn, $ENV{'DBUSER'}, $ENV{'DBPASS'}, {
mysql_auto_reconnect => 1,
mysql_enable_utf8mb4 => 1
});
$dbh->do("UPDATE imapsync SET is_running = 0");
sub sig_handler {
# Send die to force exception in "run"
die "sig_handler received signal, preparing to exit...\n";
};
open my $file, '<', "/etc/sogo/sieve.creds";
my $creds = <$file>;
close $file;
my ($master_user, $master_pass) = split /:/, $creds;
my $sth = $dbh->prepare("SELECT id,
user1,
user2,
host1,
authmech1,
password1,
exclude,
port1,
enc1,
delete2duplicates,
maxage,
subfolder2,
delete1,
delete2,
automap,
skipcrossduplicates,
maxbytespersecond,
custom_params,
subscribeall,
timeout1,
timeout2,
dry
FROM imapsync
WHERE active = 1
AND is_running = 0
AND (
UNIX_TIMESTAMP(NOW()) - UNIX_TIMESTAMP(last_run) > mins_interval * 60
OR
last_run IS NULL)
ORDER BY last_run");
$sth->execute();
my $row;
while ($row = $sth->fetchrow_arrayref()) {
$id = @$row[0];
$user1 = @$row[1];
$user2 = @$row[2];
$host1 = @$row[3];
$authmech1 = @$row[4];
$password1 = @$row[5];
$exclude = @$row[6];
$port1 = @$row[7];
$enc1 = @$row[8];
$delete2duplicates = @$row[9];
$maxage = @$row[10];
$subfolder2 = @$row[11];
$delete1 = @$row[12];
$delete2 = @$row[13];
$automap = @$row[14];
$skipcrossduplicates = @$row[15];
$maxbytespersecond = @$row[16];
$custom_params = @$row[17];
$subscribeall = @$row[18];
$timeout1 = @$row[19];
$timeout2 = @$row[20];
$dry = @$row[21];
if ($enc1 eq "TLS") { $enc1 = "--tls1"; } elsif ($enc1 eq "SSL") { $enc1 = "--ssl1"; } else { undef $enc1; }
my $template = $run_dir . '/imapsync.XXXXXXX';
my $passfile1 = File::Temp->new(TEMPLATE => $template);
my $passfile2 = File::Temp->new(TEMPLATE => $template);
binmode( $passfile1, ":utf8" );
print $passfile1 "$password1\n";
print $passfile2 trim($master_pass) . "\n";
my @custom_params_a = qqw($custom_params);
my $custom_params_ref = \@custom_params_a;
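# host2 is always the local Dovecot; user2 authenticates with Dovecot's
# master-user syntax ("user*masteruser") and the master password from
# /etc/sogo/sieve.creds, so the mailbox password itself is never needed.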
my $generated_cmds = [ "/usr/local/bin/imapsync",
"--tmpdir", "/tmp",
"--nofoldersizes",
"--addheader",
($timeout1 gt "0" ? ('--timeout1', $timeout1) : ()),
($timeout2 gt "0" ? ('--timeout2', $timeout2) : ()),
($exclude eq "" ? () : ("--exclude", $exclude)),
($subfolder2 eq "" ? () : ('--subfolder2', $subfolder2)),
($maxage eq "0" ? () : ('--maxage', $maxage)),
($maxbytespersecond eq "0" ? () : ('--maxbytespersecond', $maxbytespersecond)),
($delete2duplicates ne "1" ? () : ('--delete2duplicates')),
($subscribeall ne "1" ? () : ('--subscribeall')),
($delete1 ne "1" ? () : ('--delete')),
($delete2 ne "1" ? () : ('--delete2')),
($automap ne "1" ? () : ('--automap')),
($skipcrossduplicates ne "1" ? () : ('--skipcrossduplicates')),
(!defined($enc1) ? () : ($enc1)),
"--host1", $host1,
"--user1", $user1,
"--passfile1", $passfile1->filename,
"--port1", $port1,
"--host2", "localhost",
"--user2", $user2 . '*' . trim($master_user),
"--passfile2", $passfile2->filename,
($dry eq "1" ? ('--dry') : ()),
'--no-modulesversion',
'--noreleasecheck'];
try {
$is_running = $dbh->prepare("UPDATE imapsync SET is_running = 1, success = NULL, exit_status = NULL WHERE id = ?");
$is_running->bind_param( 1, ${id} );
$is_running->execute();
run [@$generated_cmds, @$custom_params_ref], '&>', \my $stdout;
# check exit code and status
($exit_code, $exit_status) = ($stdout =~ m/Exiting\swith\sreturn\svalue\s(\d+)\s\(([^:)]+)/);
$success = 0;
if (defined $exit_code && $exit_code == 0) {
$success = 1;
}
$update = $dbh->prepare("UPDATE imapsync SET returned_text = ?, success = ?, exit_status = ? WHERE id = ?");
$update->bind_param( 1, ${stdout} );
$update->bind_param( 2, ${success} );
$update->bind_param( 3, ${exit_status} );
$update->bind_param( 4, ${id} );
$update->execute();
} catch {
$update = $dbh->prepare("UPDATE imapsync SET returned_text = 'Could not start or finish imapsync', success = 0 WHERE id = ?");
$update->bind_param( 1, ${id} );
$update->execute();
} finally {
$update = $dbh->prepare("UPDATE imapsync SET last_run = NOW(), is_running = 0 WHERE id = ?");
$update->bind_param( 1, ${id} );
$update->execute();
};
}
$sth->finish();
$dbh->disconnect();
$lockmgr->unlock($lock_file);


@ -0,0 +1,2 @@
#!/bin/bash
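# Purge per-mailbox garbage directories once they are older than
# MAILDIR_GC_TIME minutes.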
[ -d /var/vmail/_garbage/ ] && /usr/bin/find /var/vmail/_garbage/ -mindepth 1 -maxdepth 1 -type d -cmin +${MAILDIR_GC_TIME} -exec rm -r {} \;


@ -0,0 +1,7 @@
#!/bin/bash
if [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ && ! "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
exit 0
else
doveadm fts optimize -A
fi


@ -0,0 +1,168 @@
#!/usr/bin/python3
import smtplib
import os
import sys
import MySQLdb
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import COMMASPACE, formatdate
import jinja2
from jinja2 import Template
import json
import redis
import time
import html2text
import socket
pid = str(os.getpid())
pidfile = "/tmp/quarantine_notify.pid"
if os.path.isfile(pidfile):
print("%s already exists, exiting" % (pidfile))
sys.exit()
f = open(pidfile, 'w')
f.write(pid)
f.close()
try:
while True:
try:
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0)
r.ping()
except Exception as ex:
print('%s - trying again...' % (ex))
time.sleep(3)
else:
break
time_now = int(time.time())
mailcow_hostname = os.environ.get('MAILCOW_HOSTNAME')
max_score = float(r.get('Q_MAX_SCORE') or "9999.0")
def query_mysql(query, headers = True, update = False):
while True:
try:
cnx = MySQLdb.connect(user=os.environ.get('DBUSER'), password=os.environ.get('DBPASS'), database=os.environ.get('DBNAME'), charset="utf8mb4", collation="utf8mb4_general_ci")
except Exception as ex:
print('%s - trying again...' % (ex))
time.sleep(3)
else:
break
cur = cnx.cursor()
cur.execute(query)
if not update:
result = []
columns = tuple( [d[0] for d in cur.description] )
for row in cur:
if headers:
result.append(dict(list(zip(columns, row))))
else:
result.append(row)
cur.close()
cnx.close()
return result
else:
cnx.commit()
cur.close()
cnx.close()
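# Render one digest mail per recipient from the Q_HTML template (or the
# bundled default), send it via postfix:590, then flag the included rows
# as notified. Sending is retried up to 15 times.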
def notify_rcpt(rcpt, msg_count, quarantine_acl, category):
if category == "add_header": category = "add header"
meta_query = query_mysql('SELECT SHA2(CONCAT(id, qid), 256) AS qhash, id, subject, score, sender, created, action FROM quarantine WHERE notified = 0 AND rcpt = "%s" AND score < %f AND (action = "%s" OR "all" = "%s")' % (rcpt, max_score, category, category))
print("%s: %d of %d messages qualify for notification" % (rcpt, len(meta_query), msg_count))
if len(meta_query) == 0:
return
msg_count = len(meta_query)
if r.get('Q_HTML'):
try:
template = Template(r.get('Q_HTML'))
except:
print("Error: Cannot parse quarantine template, falling back to default template.")
with open('/templates/quarantine.tpl') as file_:
template = Template(file_.read())
else:
with open('/templates/quarantine.tpl') as file_:
template = Template(file_.read())
html = template.render(meta=meta_query, username=rcpt, counter=msg_count, hostname=mailcow_hostname, quarantine_acl=quarantine_acl)
text = html2text.html2text(html)
count = 0
while count < 15:
count += 1
try:
server = smtplib.SMTP('postfix', 590, 'quarantine')
server.ehlo()
msg = MIMEMultipart('alternative')
msg_from = r.get('Q_SENDER') or "quarantine@localhost"
# Remove non-ascii chars from field
msg['From'] = ''.join([i if ord(i) < 128 else '' for i in msg_from])
msg['Subject'] = r.get('Q_SUBJ') or "Spam Quarantine Notification"
msg['Date'] = formatdate(localtime = True)
text_part = MIMEText(text, 'plain', 'utf-8')
html_part = MIMEText(html, 'html', 'utf-8')
msg.attach(text_part)
msg.attach(html_part)
msg['To'] = str(rcpt)
bcc = r.get('Q_BCC') or ""
redirect = r.get('Q_REDIRECT') or ""
text = msg.as_string()
if bcc == '':
if redirect == '':
server.sendmail(msg['From'], str(rcpt), text)
else:
server.sendmail(msg['From'], str(redirect), text)
else:
if redirect == '':
server.sendmail(msg['From'], [str(rcpt)] + [str(bcc)], text)
else:
server.sendmail(msg['From'], [str(redirect)] + [str(bcc)], text)
server.quit()
for res in meta_query:
query_mysql('UPDATE quarantine SET notified = 1 WHERE id = "%d"' % (res['id']), update = True)
r.hset('Q_LAST_NOTIFIED', rcpt, time_now)
break
except Exception as ex:
server.quit()
print('%s' % (ex))
time.sleep(3)
records = query_mysql('SELECT IFNULL(user_acl.quarantine, 0) AS quarantine_acl, count(id) AS counter, rcpt FROM quarantine LEFT OUTER JOIN user_acl ON user_acl.username = rcpt WHERE notified = 0 AND score < %f AND rcpt in (SELECT username FROM mailbox) GROUP BY rcpt' % (max_score))
for record in records:
attrs = ''
attrs_json = ''
time_trans = {
"hourly": 3600,
"daily": 86400,
"weekly": 604800
}
try:
last_notification = int(r.hget('Q_LAST_NOTIFIED', record['rcpt']))
if last_notification > time_now:
print('Last notification is > time now, assuming never')
last_notification = 0
except Exception as ex:
print('Could not determine last notification for %s, assuming never' % (record['rcpt']))
last_notification = 0
attrs_json = query_mysql('SELECT attributes FROM mailbox WHERE username = "%s"' % (record['rcpt']))
attrs = attrs_json[0]['attributes']
if isinstance(attrs, str):
# if attr is str then just load it
attrs = json.loads(attrs)
else:
# if it's bytes then decode and load it
attrs = json.loads(attrs.decode('utf-8'))
if attrs['quarantine_notification'] not in ('hourly', 'daily', 'weekly'):
continue
if last_notification == 0 or (last_notification + time_trans[attrs['quarantine_notification']]) <= time_now:
print("Notifying %s: Considering %d new items in quarantine (policy: %s)" % (record['rcpt'], record['counter'], attrs['quarantine_notification']))
notify_rcpt(record['rcpt'], record['counter'], record['quarantine_acl'], attrs['quarantine_category'])
finally:
os.unlink(pidfile)


@ -0,0 +1,94 @@
#!/usr/bin/python3
import smtplib
import os
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import COMMASPACE, formatdate
import jinja2
from jinja2 import Template
import redis
import time
import json
import sys
import html2text
from subprocess import Popen, PIPE, STDOUT
if len(sys.argv) > 2:
percent = int(sys.argv[1])
username = str(sys.argv[2])
else:
print("Args missing")
sys.exit(1)
while True:
try:
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0)
r.ping()
except Exception as ex:
print('%s - trying again...' % (ex))
time.sleep(3)
else:
break
if r.get('QW_HTML'):
try:
template = Template(r.get('QW_HTML'))
except:
print("Error: Cannot parse quarantine template, falling back to default template.")
with open('/templates/quota.tpl') as file_:
template = Template(file_.read())
else:
with open('/templates/quota.tpl') as file_:
template = Template(file_.read())
html = template.render(username=username, percent=percent)
text = html2text.html2text(html)
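# Deliver the warning locally through dovecot-lda with quota enforcement
# disabled ("noenforcing"), so the notice still fits into an already-full mailbox.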
try:
msg = MIMEMultipart('alternative')
msg['From'] = r.get('QW_SENDER') or "quota-warning@localhost"
msg['Subject'] = r.get('QW_SUBJ') or "Quota warning"
msg['Date'] = formatdate(localtime = True)
text_part = MIMEText(text, 'plain', 'utf-8')
html_part = MIMEText(html, 'html', 'utf-8')
msg.attach(text_part)
msg.attach(html_part)
msg['To'] = username
p = Popen(['/usr/libexec/dovecot/dovecot-lda', '-d', username, '-o', '"plugin/quota=maildir:User quota:noenforcing"'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p.communicate(input=bytes(msg.as_string(), 'utf-8'))
domain = username.split("@")[-1]
if domain and r.hget('QW_BCC', domain):
bcc_data = json.loads(r.hget('QW_BCC', domain))
bcc_rcpts = bcc_data['bcc_rcpts']
if bcc_data['active'] == 1:
for rcpt in bcc_rcpts:
msg = MIMEMultipart('alternative')
msg['From'] = username
subject = r.get('QW_SUBJ') or "Quota warning"
msg['Subject'] = subject + ' (' + username + ')'
msg['Date'] = formatdate(localtime = True)
text_part = MIMEText(text, 'plain', 'utf-8')
html_part = MIMEText(html, 'html', 'utf-8')
msg.attach(text_part)
msg.attach(html_part)
msg['To'] = rcpt
server = smtplib.SMTP('postfix', 588, 'quotanotification')
server.ehlo()
server.sendmail(msg['From'], str(rcpt), msg.as_string())
server.quit()
except Exception as ex:
print('Failed to send quota notification: %s' % (ex))
sys.exit(1)
try:
sys.stdout.close()
except:
pass
try:
sys.stderr.close()
except:
pass


@ -0,0 +1,28 @@
#!/bin/bash
source /source_env.sh
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
# Is replication active?
# If no replica IP is configured, there is nothing to check: report healthy and exit
if [ -z "${MAILCOW_REPLICA_IP}" ]; then
${REDIS_CMDLINE} SET DOVECOT_REPL_HEALTH 1 > /dev/null
exit
fi
FAILED_SYNCS=$(doveadm replicator status | grep "Waiting 'failed' requests" | grep -oE '[0-9]+')
# Set amount of failed jobs as DOVECOT_REPL_HEALTH
# 1 failed job for mailcow.local is expected and healthy
if [[ "${FAILED_SYNCS}" != 0 ]] && [[ "${FAILED_SYNCS}" != 1 ]]; then
printf "Dovecot replicator has %d failed jobs\n" "${FAILED_SYNCS}"
${REDIS_CMDLINE} SET DOVECOT_REPL_HEALTH "${FAILED_SYNCS}" > /dev/null
else
${REDIS_CMDLINE} SET DOVECOT_REPL_HEALTH 1 > /dev/null
fi


@ -0,0 +1,11 @@
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];
if environment :matches "imap.mailbox" "*" {
set "mailbox" "${1}";
}
if string "${mailbox}" "Trash" {
stop;
}
pipe :copy "rspamd-pipe-ham";


@ -0,0 +1,3 @@
require ["vnd.dovecot.pipe", "copy"];
pipe :copy "rspamd-pipe-spam";


@ -0,0 +1,10 @@
#!/bin/bash
FILE=/tmp/mail$$
cat > $FILE
trap "/bin/rm -f $FILE" 0 1 2 3 13 15
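# Drop the message's fuzzy hash under flag 11 (the flag the spam pipe adds),
# learn it as ham, then re-add the hash under flag 13, all via Rspamd's unix socket.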
cat ${FILE} | /usr/bin/curl -H "Flag: 11" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/fuzzydel
cat ${FILE} | /usr/bin/curl -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/learnham
cat ${FILE} | /usr/bin/curl -H "Flag: 13" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/fuzzyadd
exit 0


@ -0,0 +1,10 @@
#!/bin/bash
FILE=/tmp/mail$$
cat > $FILE
trap "/bin/rm -f $FILE" 0 1 2 3 13 15
cat ${FILE} | /usr/bin/curl -H "Flag: 13" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/fuzzydel
cat ${FILE} | /usr/bin/curl -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/learnspam
cat ${FILE} | /usr/bin/curl -H "Flag: 11" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd.${COMPOSE_PROJECT_NAME}_mailcow-network/fuzzyadd
exit 0


@ -0,0 +1,37 @@
#!/bin/bash
# Create temp directories
[[ ! -d /tmp/sa-rules-heinlein ]] && mkdir -p /tmp/sa-rules-heinlein
# Hash current SA rules
if [[ ! -f /etc/rspamd/custom/sa-rules ]]; then
HASH_SA_RULES=0
else
HASH_SA_RULES=$(cat /etc/rspamd/custom/sa-rules | md5sum | cut -d' ' -f1)
fi
# Deploy
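# The current ruleset version is published as a DNS TXT record under
# 1.4.3.spamassassin.heinlein-support.de; its digits name the tarball to fetch.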
if curl --connect-timeout 15 --retry 10 --max-time 30 https://www.spamassassin.heinlein-support.de/$(dig txt 1.4.3.spamassassin.heinlein-support.de +short | tr -d '"' | tr -dc '0-9').tar.gz --output /tmp/sa-rules-heinlein.tar.gz; then
if gzip -t /tmp/sa-rules-heinlein.tar.gz; then
tar xfvz /tmp/sa-rules-heinlein.tar.gz -C /tmp/sa-rules-heinlein
cat /tmp/sa-rules-heinlein/*cf > /etc/rspamd/custom/sa-rules
fi
else
echo "Failed to download SA rules. Exiting."
exit 0 # Must be 0 otherwise dovecot would not start at all
fi
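# Escape bare "$" characters (unless followed by "/"), likely to keep
# Rspamd's config parser from expanding them.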
sed -i -e 's/\([^\\]\)\$\([^\/]\)/\1\\$\2/g' /etc/rspamd/custom/sa-rules
if [[ "$(cat /etc/rspamd/custom/sa-rules | md5sum | cut -d' ' -f1)" != "${HASH_SA_RULES}" ]]; then
CONTAINER_NAME=rspamd-mailcow
CONTAINER_ID=$(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | \
jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" | \
jq -rc "select( .name | tostring | contains(\"${CONTAINER_NAME}\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id")
if [[ ! -z ${CONTAINER_ID} ]]; then
curl --silent --insecure -XPOST --connect-timeout 15 --max-time 120 https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${CONTAINER_ID}/restart
fi
fi
# Cleanup
rm -rf /tmp/sa-rules-heinlein /tmp/sa-rules-heinlein.tar.gz


@ -0,0 +1,8 @@
#!/bin/bash
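# Supervisor event listener: announce readiness, then SIGQUIT supervisord as
# soon as any watched program stops, exits or fails, taking the container down.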
printf "READY\n";
while read line; do
echo "Processing Event: $line" >&2;
kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin


@ -0,0 +1,24 @@
[supervisord]
nodaemon=true
user=root
pidfile=/var/run/supervisord.pid
[program:syslog-ng]
command=/usr/sbin/syslog-ng --foreground --no-caps
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
[program:dovecot]
command=/usr/sbin/dovecot -F
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh
events=PROCESS_STATE_STOPPED, PROCESS_STATE_EXITED, PROCESS_STATE_FATAL


@ -0,0 +1,46 @@
@version: 4.5
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats(freq(0));
keep_timestamp(no);
bad_hostname("^gconfd$");
};
source s_dgram {
unix-dgram("/dev/log");
internal();
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
command("LPUSH" "DOVECOT_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
filter f_mail { facility(mail); };
filter f_replica {
not match("User has no mail_replica in userdb" value("MESSAGE"));
not match("Error: sync: Unknown user in remote" value("MESSAGE"));
};
log {
source(s_dgram);
filter(f_replica);
destination(d_stdout);
filter(f_mail);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};


@ -0,0 +1,46 @@
@version: 4.5
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats(freq(0));
keep_timestamp(no);
bad_hostname("^gconfd$");
};
source s_dgram {
unix-dgram("/dev/log");
internal();
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
redis(
host("redis-mailcow")
persist-name("redis1")
port(6379)
command("LPUSH" "DOVECOT_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("redis-mailcow")
persist-name("redis2")
port(6379)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
filter f_mail { facility(mail); };
filter f_replica {
not match("User has no mail_replica in userdb" value("MESSAGE"));
not match("Error: sync: Unknown user in remote" value("MESSAGE"));
};
log {
source(s_dgram);
filter(f_replica);
destination(d_stdout);
filter(f_mail);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};


@ -0,0 +1,25 @@
#!/bin/bash
catch_non_zero() {
CMD=${1}
${CMD} > /dev/null
EC=$?
if [ ${EC} -ne 0 ]; then
echo "Command ${CMD} failed to execute, exit code was ${EC}"
fi
}
source /source_env.sh
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
catch_non_zero "${REDIS_CMDLINE} LTRIM ACME_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM POSTFIX_MAILLOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM DOVECOT_MAILLOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM SOGO_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM NETFILTER_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM AUTODISCOVER_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM API_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM RL_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM WATCHDOG_LOG 0 ${LOG_LINES}"


@ -0,0 +1,43 @@
FROM alpine:3.20
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
WORKDIR /app
ARG PIP_BREAK_SYSTEM_PACKAGES=1
ENV XTABLES_LIBDIR /usr/lib/xtables
ENV PYTHON_IPTABLES_XTABLES_VERSION 12
ENV IPTABLES_LIBDIR /usr/lib
RUN apk add --virtual .build-deps \
gcc \
python3-dev \
libffi-dev \
openssl-dev \
&& apk add -U python3 \
iptables \
iptables-dev \
ip6tables \
xtables-addons \
nftables \
tzdata \
py3-pip \
py3-nftables \
musl-dev \
&& pip3 install --ignore-installed --upgrade pip \
jsonschema \
python-iptables \
redis \
ipaddress \
dnspython \
&& apk del .build-deps
# && pip3 install --upgrade pip python-iptables==0.13.0 redis ipaddress dnspython \
COPY modules /app/modules
COPY main.py /app/
COPY ./docker-entrypoint.sh /app/
RUN chmod +x /app/docker-entrypoint.sh
CMD ["/bin/sh", "-c", "/app/docker-entrypoint.sh"]


@ -0,0 +1,29 @@
#!/bin/sh
backend=iptables
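# Heuristic backend detection: probe both firewalls and keep whichever probe
# succeeds; if both succeed, prefer the one that already holds more rules.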
nft list table ip filter > /dev/null 2>&1
nftables_found=$?
iptables -L -n > /dev/null 2>&1
iptables_found=$?
if [ $nftables_found -lt $iptables_found ]; then
backend=nftables
fi
if [ $nftables_found -gt $iptables_found ]; then
backend=iptables
fi
if [ $nftables_found -eq 0 ] && [ $nftables_found -eq $iptables_found ]; then
nftables_lines=$(nft list ruleset | wc -l)
iptables_lines=$(iptables-save | wc -l)
if [ $nftables_lines -gt $iptables_lines ]; then
backend=nftables
else
backend=iptables
fi
fi
exec python -u /app/main.py $backend


@ -0,0 +1,503 @@
#!/usr/bin/env python3
import re
import os
import sys
import time
import atexit
import signal
import ipaddress
from collections import Counter
from random import randint
from threading import Thread
from threading import Lock
import redis
import json
import dns.resolver
import dns.exception
import uuid
from modules.Logger import Logger
from modules.IPTables import IPTables
from modules.NFTables import NFTables
# globals
WHITELIST = []
BLACKLIST = []
bans = {}
quit_now = False
exit_code = 0
lock = Lock()
chain_name = "MAILCOW"
r = None
pubsub = None
clear_before_quit = False
def refreshF2boptions():
global f2boptions
global quit_now
global exit_code
f2boptions = {}
if not r.get('F2B_OPTIONS'):
f2boptions['ban_time'] = r.get('F2B_BAN_TIME')
f2boptions['max_ban_time'] = r.get('F2B_MAX_BAN_TIME')
f2boptions['ban_time_increment'] = r.get('F2B_BAN_TIME_INCREMENT')
f2boptions['max_attempts'] = r.get('F2B_MAX_ATTEMPTS')
f2boptions['retry_window'] = r.get('F2B_RETRY_WINDOW')
f2boptions['netban_ipv4'] = r.get('F2B_NETBAN_IPV4')
f2boptions['netban_ipv6'] = r.get('F2B_NETBAN_IPV6')
else:
try:
f2boptions = json.loads(r.get('F2B_OPTIONS'))
except ValueError:
logger.logCrit('Error loading F2B options: F2B_OPTIONS is not json')
quit_now = True
exit_code = 2
verifyF2boptions(f2boptions)
r.set('F2B_OPTIONS', json.dumps(f2boptions, ensure_ascii=False))
def verifyF2boptions(f2boptions):
verifyF2boption(f2boptions,'ban_time', 1800)
verifyF2boption(f2boptions,'max_ban_time', 10000)
verifyF2boption(f2boptions,'ban_time_increment', True)
verifyF2boption(f2boptions,'max_attempts', 10)
verifyF2boption(f2boptions,'retry_window', 600)
verifyF2boption(f2boptions,'netban_ipv4', 32)
verifyF2boption(f2boptions,'netban_ipv6', 128)
verifyF2boption(f2boptions,'banlist_id', str(uuid.uuid4()))
verifyF2boption(f2boptions,'manage_external', 0)
def verifyF2boption(f2boptions, f2boption, f2bdefault):
f2boptions[f2boption] = f2boptions[f2boption] if f2boption in f2boptions and f2boptions[f2boption] is not None else f2bdefault
def refreshF2bregex():
global f2bregex
global quit_now
global exit_code
if not r.get('F2B_REGEX'):
f2bregex = {}
f2bregex[1] = r'mailcow UI: Invalid password for .+ by ([0-9a-f\.:]+)'
f2bregex[2] = r'Rspamd UI: Invalid password by ([0-9a-f\.:]+)'
f2bregex[3] = r'warning: .*\[([0-9a-f\.:]+)\]: SASL .+ authentication failed: (?!.*Connection lost to authentication server).+'
f2bregex[4] = r'warning: non-SMTP command from .*\[([0-9a-f\.:]+)]:.+'
f2bregex[5] = r'NOQUEUE: reject: RCPT from \[([0-9a-f\.:]+)].+Protocol error.+'
f2bregex[6] = r'-login: Disconnected.+ \(auth failed, .+\): user=.*, method=.+, rip=([0-9a-f\.:]+),'
f2bregex[7] = r'-login: Aborted login.+ \(auth failed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
f2bregex[8] = r'-login: Aborted login.+ \(tried to use disallowed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
f2bregex[9] = r'SOGo.+ Login from \'([0-9a-f\.:]+)\' for user .+ might not have worked'
f2bregex[10] = r'([0-9a-f\.:]+) \"GET \/SOGo\/.* HTTP.+\" 403 .+'
r.set('F2B_REGEX', json.dumps(f2bregex, ensure_ascii=False))
else:
try:
f2bregex = {}
f2bregex = json.loads(r.get('F2B_REGEX'))
except ValueError:
logger.logCrit('Error loading F2B options: F2B_REGEX is not json')
quit_now = True
exit_code = 2
def get_ip(address):
ip = ipaddress.ip_address(address)
if type(ip) is ipaddress.IPv6Address and ip.ipv4_mapped:
ip = ip.ipv4_mapped
if ip.is_private or ip.is_loopback:
return False
return ip
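# Sliding-window banning: the attempt counter resets once RETRY_WINDOW has
# passed since the last hit; reaching MAX_ATTEMPTS bans the surrounding
# /netban_ipv4 (or /netban_ipv6) network rather than the single address.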
def ban(address):
global f2boptions
global lock
refreshF2boptions()
MAX_ATTEMPTS = int(f2boptions['max_attempts'])
RETRY_WINDOW = int(f2boptions['retry_window'])
NETBAN_IPV4 = '/' + str(f2boptions['netban_ipv4'])
NETBAN_IPV6 = '/' + str(f2boptions['netban_ipv6'])
ip = get_ip(address)
if not ip: return
address = str(ip)
self_network = ipaddress.ip_network(address)
with lock:
temp_whitelist = set(WHITELIST)
if temp_whitelist:
for wl_key in temp_whitelist:
wl_net = ipaddress.ip_network(wl_key, False)
if wl_net.overlaps(self_network):
logger.logInfo('Address %s is whitelisted by rule %s' % (self_network, wl_net))
return
net = ipaddress.ip_network((address + (NETBAN_IPV4 if type(ip) is ipaddress.IPv4Address else NETBAN_IPV6)), strict=False)
net = str(net)
if not net in bans:
bans[net] = {'attempts': 0, 'last_attempt': 0, 'ban_counter': 0}
current_attempt = time.time()
if current_attempt - bans[net]['last_attempt'] > RETRY_WINDOW:
bans[net]['attempts'] = 0
bans[net]['attempts'] += 1
bans[net]['last_attempt'] = current_attempt
if bans[net]['attempts'] >= MAX_ATTEMPTS:
cur_time = int(round(time.time()))
NET_BAN_TIME = calcNetBanTime(bans[net]['ban_counter'])
logger.logCrit('Banning %s for %d minutes' % (net, NET_BAN_TIME / 60 ))
if type(ip) is ipaddress.IPv4Address and int(f2boptions['manage_external']) != 1:
with lock:
tables.banIPv4(net)
elif int(f2boptions['manage_external']) != 1:
with lock:
tables.banIPv6(net)
r.hset('F2B_ACTIVE_BANS', '%s' % net, cur_time + NET_BAN_TIME)
else:
logger.logWarn('%d more attempts in the next %d seconds until %s is banned' % (MAX_ATTEMPTS - bans[net]['attempts'], RETRY_WINDOW, net))
def unban(net):
global lock
if not net in bans:
logger.logInfo('%s is not banned, skipping unban and deleting from queue (if any)' % net)
r.hdel('F2B_QUEUE_UNBAN', '%s' % net)
return
logger.logInfo('Unbanning %s' % net)
if type(ipaddress.ip_network(net)) is ipaddress.IPv4Network:
with lock:
tables.unbanIPv4(net)
else:
with lock:
tables.unbanIPv6(net)
r.hdel('F2B_ACTIVE_BANS', '%s' % net)
r.hdel('F2B_QUEUE_UNBAN', '%s' % net)
if net in bans:
bans[net]['attempts'] = 0
bans[net]['ban_counter'] += 1
def permBan(net, unban=False):
global f2boptions
global lock
is_unbanned = False
is_banned = False
if type(ipaddress.ip_network(net, strict=False)) is ipaddress.IPv4Network:
with lock:
if unban:
is_unbanned = tables.unbanIPv4(net)
elif int(f2boptions['manage_external']) != 1:
is_banned = tables.banIPv4(net)
else:
with lock:
if unban:
is_unbanned = tables.unbanIPv6(net)
elif int(f2boptions['manage_external']) != 1:
is_banned = tables.banIPv6(net)
if is_unbanned:
r.hdel('F2B_PERM_BANS', '%s' % net)
logger.logCrit('Removed host/network %s from blacklist' % net)
elif is_banned:
r.hset('F2B_PERM_BANS', '%s' % net, int(round(time.time())))
logger.logCrit('Added host/network %s to blacklist' % net)
def clear():
global lock
logger.logInfo('Clearing all bans')
for net in bans.copy():
unban(net)
with lock:
tables.clearIPv4Table()
tables.clearIPv6Table()
try:
if r is not None:
r.delete('F2B_ACTIVE_BANS')
r.delete('F2B_PERM_BANS')
except Exception as ex:
logger.logWarn('Error clearing redis keys F2B_ACTIVE_BANS and F2B_PERM_BANS: %s' % ex)
def watch():
global pubsub
global quit_now
global exit_code
logger.logInfo('Watching Redis channel F2B_CHANNEL')
pubsub.subscribe('F2B_CHANNEL')
while not quit_now:
try:
for item in pubsub.listen():
refreshF2bregex()
for rule_id, rule_regex in f2bregex.items():
if item['data'] and item['type'] == 'message':
try:
result = re.search(rule_regex, item['data'])
except re.error:
result = False
if result:
addr = result.group(1)
ip = ipaddress.ip_address(addr)
if ip.is_private or ip.is_loopback:
continue
logger.logWarn('%s matched rule id %s (%s)' % (addr, rule_id, item['data']))
ban(addr)
except Exception as ex:
logger.logWarn('Error reading log line from pubsub: %s' % ex)
pubsub = None
quit_now = True
exit_code = 2
def snat4(snat_target):
global lock
global quit_now
while not quit_now:
time.sleep(10)
with lock:
tables.snat4(snat_target, os.getenv('IPV4_NETWORK', '172.22.1') + '.0/24')
def snat6(snat_target):
global lock
global quit_now
while not quit_now:
time.sleep(10)
with lock:
tables.snat6(snat_target, os.getenv('IPV6_NETWORK', 'fd4d:6169:6c63:6f77::/64'))
def autopurge():
global f2boptions
while not quit_now:
time.sleep(10)
refreshF2boptions()
MAX_ATTEMPTS = int(f2boptions['max_attempts'])
QUEUE_UNBAN = r.hgetall('F2B_QUEUE_UNBAN')
if QUEUE_UNBAN:
for net in QUEUE_UNBAN:
unban(str(net))
for net in bans.copy():
if bans[net]['attempts'] >= MAX_ATTEMPTS:
NET_BAN_TIME = calcNetBanTime(bans[net]['ban_counter'])
TIME_SINCE_LAST_ATTEMPT = time.time() - bans[net]['last_attempt']
if TIME_SINCE_LAST_ATTEMPT > NET_BAN_TIME:
unban(net)
def mailcowChainOrder():
global lock
global quit_now
global exit_code
while not quit_now:
time.sleep(10)
with lock:
quit_now, exit_code = tables.checkIPv4ChainOrder()
if quit_now: return
quit_now, exit_code = tables.checkIPv6ChainOrder()
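# Exponential backoff: with ban_time_increment enabled the ban lasts
# BAN_TIME * 2^ban_counter seconds, clamped between BAN_TIME and MAX_BAN_TIME.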
def calcNetBanTime(ban_counter):
global f2boptions
BAN_TIME = int(f2boptions['ban_time'])
MAX_BAN_TIME = int(f2boptions['max_ban_time'])
BAN_TIME_INCREMENT = bool(f2boptions['ban_time_increment'])
NET_BAN_TIME = BAN_TIME if not BAN_TIME_INCREMENT else BAN_TIME * 2 ** ban_counter
NET_BAN_TIME = max([BAN_TIME, min([NET_BAN_TIME, MAX_BAN_TIME])])
return NET_BAN_TIME
def isIpNetwork(address):
try:
ipaddress.ip_network(address, False)
except ValueError:
return False
return True
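# Resolve any hostnames in the white/blacklist to their current A/AAAA
# records and return the combined set of networks and addresses.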
def genNetworkList(list):
resolver = dns.resolver.Resolver()
hostnames = []
networks = []
for key in list:
if isIpNetwork(key):
networks.append(key)
else:
hostnames.append(key)
for hostname in hostnames:
hostname_ips = []
for rdtype in ['A', 'AAAA']:
try:
answer = resolver.resolve(qname=hostname, rdtype=rdtype, lifetime=3)
except dns.exception.Timeout:
logger.logInfo('Hostname %s timedout on resolve' % hostname)
break
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
continue
except dns.exception.DNSException as dnsexception:
logger.logInfo('%s' % dnsexception)
continue
for rdata in answer:
hostname_ips.append(rdata.to_text())
networks.extend(hostname_ips)
return set(networks)
def whitelistUpdate():
global lock
global quit_now
global WHITELIST
while not quit_now:
start_time = time.time()
list = r.hgetall('F2B_WHITELIST')
new_whitelist = []
if list:
new_whitelist = genNetworkList(list)
with lock:
if Counter(new_whitelist) != Counter(WHITELIST):
WHITELIST = new_whitelist
logger.logInfo('Whitelist was changed, it has %s entries' % len(WHITELIST))
time.sleep(60.0 - ((time.time() - start_time) % 60.0))
def blacklistUpdate():
global quit_now
global BLACKLIST
while not quit_now:
start_time = time.time()
list = r.hgetall('F2B_BLACKLIST')
new_blacklist = []
if list:
new_blacklist = genNetworkList(list)
if Counter(new_blacklist) != Counter(BLACKLIST):
addban = set(new_blacklist).difference(BLACKLIST)
delban = set(BLACKLIST).difference(new_blacklist)
BLACKLIST = new_blacklist
logger.logInfo('Blacklist was changed; it now has %s entries' % len(BLACKLIST))
if addban:
for net in addban:
permBan(net=net)
if delban:
for net in delban:
permBan(net=net, unban=True)
time.sleep(60.0 - ((time.time() - start_time) % 60.0))
def sigterm_quit(signum, frame):
global clear_before_quit
clear_before_quit = True
sys.exit(exit_code)
def before_quit():
if clear_before_quit:
clear()
if pubsub is not None:
pubsub.unsubscribe()
if __name__ == '__main__':
atexit.register(before_quit)
signal.signal(signal.SIGTERM, sigterm_quit)
# init Logger
logger = Logger()
# init backend
backend = sys.argv[1]
if backend == "nftables":
logger.logInfo('Using NFTables backend')
tables = NFTables(chain_name, logger)
else:
logger.logInfo('Using IPTables backend')
tables = IPTables(chain_name, logger)
# In case a previous session was killed without cleanup
clear()
# Reinit MAILCOW chain
# Is called before threads start, no locking
logger.logInfo("Initializing mailcow netfilter chain")
tables.initChainIPv4()
tables.initChainIPv6()
if os.getenv("DISABLE_NETFILTER_ISOLATION_RULE").lower() in ("y", "yes"):
logger.logInfo(f"Skipping {chain_name} isolation")
else:
logger.logInfo(f"Setting {chain_name} isolation")
tables.create_mailcow_isolation_rule("br-mailcow", [3306, 6379, 8983, 12345], os.getenv("MAILCOW_REPLICA_IP"))
# connect to redis
while True:
try:
redis_slaveof_ip = os.getenv('REDIS_SLAVEOF_IP', '')
redis_slaveof_port = os.getenv('REDIS_SLAVEOF_PORT', '')
if "".__eq__(redis_slaveof_ip):
r = redis.StrictRedis(host=os.getenv('IPV4_NETWORK', '172.22.1') + '.249', decode_responses=True, port=6379, db=0)
else:
r = redis.StrictRedis(host=redis_slaveof_ip, decode_responses=True, port=redis_slaveof_port, db=0)
r.ping()
pubsub = r.pubsub()
except Exception as ex:
print('%s - trying again in 3 seconds' % (ex))
time.sleep(3)
else:
break
logger.set_redis(r)
# rename fail2ban to netfilter
if r.exists('F2B_LOG'):
r.rename('F2B_LOG', 'NETFILTER_LOG')
# clear bans in redis
r.delete('F2B_ACTIVE_BANS')
r.delete('F2B_PERM_BANS')
refreshF2boptions()
watch_thread = Thread(target=watch)
watch_thread.daemon = True
watch_thread.start()
if os.getenv('SNAT_TO_SOURCE') and os.getenv('SNAT_TO_SOURCE') != 'n':
try:
snat_ip = os.getenv('SNAT_TO_SOURCE')
snat_ipo = ipaddress.ip_address(snat_ip)
if isinstance(snat_ipo, ipaddress.IPv4Address):
snat4_thread = Thread(target=snat4,args=(snat_ip,))
snat4_thread.daemon = True
snat4_thread.start()
except ValueError:
print(os.getenv('SNAT_TO_SOURCE') + ' is not a valid IPv4 address')
if os.getenv('SNAT6_TO_SOURCE') and os.getenv('SNAT6_TO_SOURCE') != 'n':
try:
snat_ip = os.getenv('SNAT6_TO_SOURCE')
snat_ipo = ipaddress.ip_address(snat_ip)
if isinstance(snat_ipo, ipaddress.IPv6Address):
snat6_thread = Thread(target=snat6,args=(snat_ip,))
snat6_thread.daemon = True
snat6_thread.start()
except ValueError:
print(os.getenv('SNAT6_TO_SOURCE') + ' is not a valid IPv6 address')
autopurge_thread = Thread(target=autopurge)
autopurge_thread.daemon = True
autopurge_thread.start()
mailcowchainwatch_thread = Thread(target=mailcowChainOrder)
mailcowchainwatch_thread.daemon = True
mailcowchainwatch_thread.start()
blacklistupdate_thread = Thread(target=blacklistUpdate)
blacklistupdate_thread.daemon = True
blacklistupdate_thread.start()
whitelistupdate_thread = Thread(target=whitelistUpdate)
whitelistupdate_thread.daemon = True
whitelistupdate_thread.start()
while not quit_now:
time.sleep(0.5)
sys.exit(exit_code)

View file

@ -0,0 +1,252 @@
import iptc
import time
import os
class IPTables:
def __init__(self, chain_name, logger):
self.chain_name = chain_name
self.logger = logger
def initChainIPv4(self):
if not iptc.Chain(iptc.Table(iptc.Table.FILTER), self.chain_name) in iptc.Table(iptc.Table.FILTER).chains:
iptc.Table(iptc.Table.FILTER).create_chain(self.chain_name)
for c in ['FORWARD', 'INPUT']:
chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), c)
rule = iptc.Rule()
rule.src = '0.0.0.0/0'
rule.dst = '0.0.0.0/0'
target = iptc.Target(rule, self.chain_name)
rule.target = target
if rule not in chain.rules:
chain.insert_rule(rule)
def initChainIPv6(self):
if not iptc.Chain(iptc.Table6(iptc.Table6.FILTER), self.chain_name) in iptc.Table6(iptc.Table6.FILTER).chains:
iptc.Table6(iptc.Table6.FILTER).create_chain(self.chain_name)
for c in ['FORWARD', 'INPUT']:
chain = iptc.Chain(iptc.Table6(iptc.Table6.FILTER), c)
rule = iptc.Rule6()
rule.src = '::/0'
rule.dst = '::/0'
target = iptc.Target(rule, self.chain_name)
rule.target = target
if rule not in chain.rules:
chain.insert_rule(rule)
def checkIPv4ChainOrder(self):
filter_table = iptc.Table(iptc.Table.FILTER)
filter_table.refresh()
return self.checkChainOrder(filter_table)
def checkIPv6ChainOrder(self):
filter_table = iptc.Table6(iptc.Table6.FILTER)
filter_table.refresh()
return self.checkChainOrder(filter_table)
def checkChainOrder(self, filter_table):
err = False
exit_code = None
forward_chain = iptc.Chain(filter_table, 'FORWARD')
input_chain = iptc.Chain(filter_table, 'INPUT')
for chain in [forward_chain, input_chain]:
target_found = False
for position, item in enumerate(chain.rules):
if item.target.name == self.chain_name:
target_found = True
if position > 2:
self.logger.logCrit('Error in %s chain: %s target found at position %s, restarting container' % (chain.name, self.chain_name, position))
err = True
exit_code = 2
if not target_found:
self.logger.logCrit('Error in %s chain: %s target not found, restarting container' % (chain.name, self.chain_name))
err = True
exit_code = 2
return err, exit_code
def clearIPv4Table(self):
self.clearTable(iptc.Table(iptc.Table.FILTER))
def clearIPv6Table(self):
self.clearTable(iptc.Table6(iptc.Table6.FILTER))
def clearTable(self, filter_table):
filter_table.autocommit = False
forward_chain = iptc.Chain(filter_table, "FORWARD")
input_chain = iptc.Chain(filter_table, "INPUT")
mailcow_chain = iptc.Chain(filter_table, self.chain_name)
if mailcow_chain in filter_table.chains:
for rule in mailcow_chain.rules:
mailcow_chain.delete_rule(rule)
for rule in forward_chain.rules:
if rule.target.name == self.chain_name:
forward_chain.delete_rule(rule)
for rule in input_chain.rules:
if rule.target.name == self.chain_name:
input_chain.delete_rule(rule)
filter_table.delete_chain(self.chain_name)
filter_table.commit()
filter_table.refresh()
filter_table.autocommit = True
def banIPv4(self, source):
chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), self.chain_name)
rule = iptc.Rule()
rule.src = source
target = iptc.Target(rule, "REJECT")
rule.target = target
if rule in chain.rules:
return False
chain.insert_rule(rule)
return True
def banIPv6(self, source):
chain = iptc.Chain(iptc.Table6(iptc.Table6.FILTER), self.chain_name)
rule = iptc.Rule6()
rule.src = source
target = iptc.Target(rule, "REJECT")
rule.target = target
if rule in chain.rules:
return False
chain.insert_rule(rule)
return True
def unbanIPv4(self, source):
chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), self.chain_name)
rule = iptc.Rule()
rule.src = source
target = iptc.Target(rule, "REJECT")
rule.target = target
if rule not in chain.rules:
return False
chain.delete_rule(rule)
return True
def unbanIPv6(self, source):
chain = iptc.Chain(iptc.Table6(iptc.Table6.FILTER), self.chain_name)
rule = iptc.Rule6()
rule.src = source
target = iptc.Target(rule, "REJECT")
rule.target = target
if rule not in chain.rules:
return False
chain.delete_rule(rule)
return True
def snat4(self, snat_target, source):
try:
table = iptc.Table('nat')
table.refresh()
chain = iptc.Chain(table, 'POSTROUTING')
table.autocommit = False
new_rule = self.getSnat4Rule(snat_target, source)
if not chain.rules:
# if there are no rules in the chain, insert the new rule directly
self.logger.logInfo(f'Added POSTROUTING rule for source network {new_rule.src} to SNAT target {snat_target}')
chain.insert_rule(new_rule)
else:
for position, rule in enumerate(chain.rules):
if not hasattr(rule.target, 'parameter'):
continue
match = all((
new_rule.get_src() == rule.get_src(),
new_rule.get_dst() == rule.get_dst(),
new_rule.target.parameters == rule.target.parameters,
new_rule.target.name == rule.target.name
))
if position == 0:
if not match:
self.logger.logInfo(f'Added POSTROUTING rule for source network {new_rule.src} to SNAT target {snat_target}')
chain.insert_rule(new_rule)
else:
if match:
self.logger.logInfo(f'Remove rule for source network {new_rule.src} to SNAT target {snat_target} from POSTROUTING chain at position {position}')
chain.delete_rule(rule)
table.commit()
table.autocommit = True
return True
except Exception:
self.logger.logCrit('Error running SNAT4, retrying...')
return False
def snat6(self, snat_target, source):
try:
table = iptc.Table6('nat')
table.refresh()
chain = iptc.Chain(table, 'POSTROUTING')
table.autocommit = False
new_rule = self.getSnat6Rule(snat_target, source)
if new_rule not in chain.rules:
self.logger.logInfo('Added POSTROUTING rule for source network %s to SNAT target %s' % (new_rule.src, snat_target))
chain.insert_rule(new_rule)
else:
for position, item in enumerate(chain.rules):
if item == new_rule:
if position != 0:
chain.delete_rule(new_rule)
table.commit()
table.autocommit = True
except Exception:
self.logger.logCrit('Error running SNAT6, retrying...')
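# The rule built below is roughly equivalent to this CLI call (a sketch;
# the comment match additionally carries a creation timestamp):
# iptables -t nat -I POSTROUTING -s <source> ! -d <source> -j SNAT --to-source <snat_target>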
def getSnat4Rule(self, snat_target, source):
rule = iptc.Rule()
rule.src = source
rule.dst = '!' + rule.src
target = rule.create_target("SNAT")
target.to_source = snat_target
match = rule.create_match("comment")
match.comment = f'{int(round(time.time()))}'
return rule
def getSnat6Rule(self, snat_target, source):
rule = iptc.Rule6()
rule.src = source
rule.dst = '!' + rule.src
target = rule.create_target("SNAT")
target.to_source = snat_target
return rule
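# CLI sketch of the two rules inserted below; the ACCEPT rule is only added
# when _allow is set and, being inserted last at position 0, ends up above the DROP:
# iptables -I MAILCOW ! -i <iface> -o <iface> -p tcp -m multiport --dports <ports> -j DROP
# iptables -I MAILCOW -s <allow> ! -i <iface> -o <iface> -p tcp -m multiport --dports <ports> -j ACCEPT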
def create_mailcow_isolation_rule(self, _interface:str, _dports:list, _allow:str = ""):
try:
chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), self.chain_name)
# insert mailcow isolation rule
rule = iptc.Rule()
rule.in_interface = f'!{_interface}'
rule.out_interface = _interface
rule.protocol = 'tcp'
rule.create_target("DROP")
match = rule.create_match("multiport")
match.dports = ','.join(map(str, _dports))
if rule in chain.rules:
chain.delete_rule(rule)
chain.insert_rule(rule, position=0)
# insert mailcow isolation exception rule
if _allow != "":
rule = iptc.Rule()
rule.src = _allow
rule.in_interface = f'!{_interface}'
rule.out_interface = _interface
rule.protocol = 'tcp'
rule.create_target("ACCEPT")
match = rule.create_match("multiport")
match.dports = ','.join(map(str, _dports))
if rule in chain.rules:
chain.delete_rule(rule)
chain.insert_rule(rule, position=0)
return True
except Exception as e:
self.logger.logCrit(f"Error adding {self.chain_name} isolation: {e}")
return False

View file

@ -0,0 +1,30 @@
import time
import json
class Logger:
def __init__(self):
self.r = None
def set_redis(self, redis):
self.r = redis
def log(self, priority, message):
tolog = {}
tolog['time'] = int(round(time.time()))
tolog['priority'] = priority
tolog['message'] = message
print(message)
if self.r is not None:
try:
self.r.lpush('NETFILTER_LOG', json.dumps(tolog, ensure_ascii=False))
except Exception as ex:
print('Failed logging to redis: %s' % (ex))
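# Example of an entry as it lands in the NETFILTER_LOG Redis list (a sketch;
# timestamp and message are illustrative):
# {"time": 1713370000, "priority": "warn", "message": "203.0.113.4 matched rule id 1 (...)"}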
def logWarn(self, message):
self.log('warn', message)
def logCrit(self, message):
self.log('crit', message)
def logInfo(self, message):
self.log('info', message)

View file

@ -0,0 +1,659 @@
import nftables
import ipaddress
import os
class NFTables:
def __init__(self, chain_name, logger):
self.chain_name = chain_name
self.logger = logger
self.nft = nftables.Nftables()
self.nft.set_json_output(True)
self.nft.set_handle_output(True)
self.nft_chain_names = {'ip': {'filter': {'input': '', 'forward': ''}, 'nat': {'postrouting': ''} },
'ip6': {'filter': {'input': '', 'forward': ''}, 'nat': {'postrouting': ''} } }
self.search_current_chains()
def initChainIPv4(self):
self.insert_mailcow_chains("ip")
def initChainIPv6(self):
self.insert_mailcow_chains("ip6")
def checkIPv4ChainOrder(self):
return self.checkChainOrder("ip")
def checkIPv6ChainOrder(self):
return self.checkChainOrder("ip6")
def checkChainOrder(self, family):
err = False
exit_code = None
for chain in ['input', 'forward']:
chain_position = self.check_mailcow_chains(family, chain)
if chain_position is None: continue
if chain_position is False:
self.logger.logCrit(f'MAILCOW target not found in the {family} {chain} chain, restarting container to fix it...')
err = True
exit_code = 2
continue
if chain_position > 0:
chain_position += 1
self.logger.logCrit(f'MAILCOW target is at position {chain_position} in the {family} {chain} chain, restarting container to fix it...')
err = True
exit_code = 2
return err, exit_code
def clearIPv4Table(self):
self.clearTable("ip")
def clearIPv6Table(self):
self.clearTable("ip6")
def clearTable(self, _family):
is_empty_dict = True
json_command = self.get_base_dict()
chain_handle = self.get_chain_handle(_family, "filter", self.chain_name)
# if there is no handle, the chain doesn't exist
if chain_handle is not None:
is_empty_dict = False
# flush chain
mailcow_chain = {'family': _family, 'table': 'filter', 'name': self.chain_name}
flush_chain = {'flush': {'chain': mailcow_chain}}
json_command["nftables"].append(flush_chain)
# remove rule in forward chain
# remove rule in input chain
chains_family = [self.nft_chain_names[_family]['filter']['input'],
self.nft_chain_names[_family]['filter']['forward'] ]
for chain_base in chains_family:
if not chain_base: continue
rules_handle = self.get_rules_handle(_family, "filter", chain_base)
if rules_handle is not None:
for r_handle in rules_handle:
is_empty_dict = False
mailcow_rule = {'family':_family,
'table': 'filter',
'chain': chain_base,
'handle': r_handle }
delete_rules = {'delete': {'rule': mailcow_rule} }
json_command["nftables"].append(delete_rules)
# remove chain
# after delete all rules referencing this chain
if chain_handle is not None:
mc_chain_handle = {'family':_family,
'table': 'filter',
'name': self.chain_name,
'handle': chain_handle }
delete_chain = {'delete': {'chain': mc_chain_handle} }
json_command["nftables"].append(delete_chain)
if not is_empty_dict:
if self.nft_exec_dict(json_command):
self.logger.logInfo(f"Clear completed: {_family}")
def banIPv4(self, source):
ban_dict = self.get_ban_ip_dict(source, "ip")
return self.nft_exec_dict(ban_dict)
def banIPv6(self, source):
ban_dict = self.get_ban_ip_dict(source, "ip6")
return self.nft_exec_dict(ban_dict)
def unbanIPv4(self, source):
unban_dict = self.get_unban_ip_dict(source, "ip")
if not unban_dict:
return False
return self.nft_exec_dict(unban_dict)
def unbanIPv6(self, source):
unban_dict = self.get_unban_ip_dict(source, "ip6")
if not unban_dict:
return False
return self.nft_exec_dict(unban_dict)
def snat4(self, snat_target, source):
self.snat_rule("ip", snat_target, source)
def snat6(self, snat_target, source):
self.snat_rule("ip6", snat_target, source)
def nft_exec_dict(self, query: dict):
if not query: return False
rc, output, error = self.nft.json_cmd(query)
if rc != 0:
#self.logger.logCrit(f"Nftables Error: {error}")
return False
# Prevent returning False or empty string on commands that do not produce output
if not output:
return True
return output
def get_base_dict(self):
return {'nftables': [{ 'metainfo': { 'json_schema_version': 1} } ] }
def search_current_chains(self):
nft_chain_priority = {'ip': {'filter': {'input': None, 'forward': None}, 'nat': {'postrouting': None} },
'ip6': {'filter': {'input': None, 'forward': None}, 'nat': {'postrouting': None} } }
# Command: 'nft list chains'
_list = {'list' : {'chains': 'null'} }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if kernel_ruleset:
for _object in kernel_ruleset['nftables']:
chain = _object.get("chain")
if not chain: continue
_family = chain['family']
_table = chain['table']
_hook = chain.get("hook")
_priority = chain.get("prio")
_name = chain['name']
if _family not in self.nft_chain_names: continue
if _table not in self.nft_chain_names[_family]: continue
if _hook not in self.nft_chain_names[_family][_table]: continue
if _priority is None: continue
_saved_priority = nft_chain_priority[_family][_table][_hook]
if _saved_priority is None or _priority < _saved_priority:
# at this point, we know the chain has:
# hook and priority set
# and it has the lowest priority
nft_chain_priority[_family][_table][_hook] = _priority
self.nft_chain_names[_family][_table][_hook] = _name
def search_for_chain(self, kernel_ruleset: dict, chain_name: str):
found = False
for _object in kernel_ruleset["nftables"]:
chain = _object.get("chain")
if not chain:
continue
ch_name = chain.get("name")
if ch_name == chain_name:
found = True
break
return found
def get_chain_dict(self, _family: str, _name: str):
# nft (add | create) chain [<family>] <table> <name>
_chain_opts = {'family': _family, 'table': 'filter', 'name': _name }
_add = {'add': {'chain': _chain_opts} }
final_chain = self.get_base_dict()
final_chain["nftables"].append(_add)
return final_chain
def get_mailcow_jump_rule_dict(self, _family: str, _chain: str):
_jump_rule = self.get_base_dict()
_expr_opt=[]
_expr_counter = {'family': _family, 'table': 'filter', 'packets': 0, 'bytes': 0}
_counter_dict = {'counter': _expr_counter}
_expr_opt.append(_counter_dict)
_jump_opts = {'jump': {'target': self.chain_name} }
_expr_opt.append(_jump_opts)
_rule_params = {'family': _family,
'table': 'filter',
'chain': _chain,
'expr': _expr_opt,
'comment': "mailcow" }
_add_rule = {'insert': {'rule': _rule_params} }
_jump_rule["nftables"].append(_add_rule)
return _jump_rule
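# The dict built above corresponds roughly to this CLI command (a sketch):
# nft insert rule <family> filter <chain> counter jump MAILCOW comment "mailcow"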
def insert_mailcow_chains(self, _family: str):
nft_input_chain = self.nft_chain_names[_family]['filter']['input']
nft_forward_chain = self.nft_chain_names[_family]['filter']['forward']
# Command: 'nft list table <family> filter'
_table_opts = {'family': _family, 'name': 'filter'}
_list = {'list': {'table': _table_opts} }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if kernel_ruleset:
# chain
if not self.search_for_chain(kernel_ruleset, self.chain_name):
cadena = self.get_chain_dict(_family, self.chain_name)
if self.nft_exec_dict(cadena):
self.logger.logInfo(f"MAILCOW {_family} chain created successfully.")
input_jump_found, forward_jump_found = False, False
for _object in kernel_ruleset["nftables"]:
if not _object.get("rule"):
continue
rule = _object["rule"]
if nft_input_chain and rule["chain"] == nft_input_chain:
if rule.get("comment") and rule["comment"] == "mailcow":
input_jump_found = True
if nft_forward_chain and rule["chain"] == nft_forward_chain:
if rule.get("comment") and rule["comment"] == "mailcow":
forward_jump_found = True
if not input_jump_found:
command = self.get_mailcow_jump_rule_dict(_family, nft_input_chain)
self.nft_exec_dict(command)
if not forward_jump_found:
command = self.get_mailcow_jump_rule_dict(_family, nft_forward_chain)
self.nft_exec_dict(command)
def delete_nat_rule(self, _family:str, _chain: str, _handle:str):
delete_command = self.get_base_dict()
_rule_opts = {'family': _family,
'table': 'nat',
'chain': _chain,
'handle': _handle }
_delete = {'delete': {'rule': _rule_opts} }
delete_command["nftables"].append(_delete)
return self.nft_exec_dict(delete_command)
def delete_filter_rule(self, _family:str, _chain: str, _handle:str):
delete_command = self.get_base_dict()
_rule_opts = {'family': _family,
'table': 'filter',
'chain': _chain,
'handle': _handle }
_delete = {'delete': {'rule': _rule_opts} }
delete_command["nftables"].append(_delete)
return self.nft_exec_dict(delete_command)
def snat_rule(self, _family: str, snat_target: str, source_address: str):
chain_name = self.nft_chain_names[_family]['nat']['postrouting']
# no postrouting chain, may occur if docker has ipv6 disabled.
if not chain_name: return
# Command: nft list chain <family> nat <chain_name>
_chain_opts = {'family': _family, 'table': 'nat', 'name': chain_name}
_list = {'list':{'chain': _chain_opts} }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if not kernel_ruleset:
return
rule_position = 0
rule_handle = None
rule_found = False
for _object in kernel_ruleset["nftables"]:
if not _object.get("rule"):
continue
rule = _object["rule"]
if not rule.get("comment") or not rule["comment"] == "mailcow":
rule_position +=1
continue
rule_found = True
rule_handle = rule["handle"]
break
dest_net = ipaddress.ip_network(source_address, strict=False)
target_net = ipaddress.ip_network(snat_target, strict=False)
if rule_found:
saddr_ip = rule["expr"][0]["match"]["right"]["prefix"]["addr"]
saddr_len = int(rule["expr"][0]["match"]["right"]["prefix"]["len"])
daddr_ip = rule["expr"][1]["match"]["right"]["prefix"]["addr"]
daddr_len = int(rule["expr"][1]["match"]["right"]["prefix"]["len"])
target_ip = rule["expr"][3]["snat"]["addr"]
saddr_net = ipaddress.ip_network(saddr_ip + '/' + str(saddr_len), strict=False)
daddr_net = ipaddress.ip_network(daddr_ip + '/' + str(daddr_len), strict=False)
current_target_net = ipaddress.ip_network(target_ip, strict=False)
match = all((
dest_net == saddr_net,
dest_net == daddr_net,
target_net == current_target_net
))
try:
if rule_position == 0:
if not match:
# Position 0 , it is a mailcow rule , but it does not have the same parameters
if self.delete_nat_rule(_family, chain_name, rule_handle):
self.logger.logInfo(f'Remove rule for source network {saddr_net} to SNAT target {target_net} from {_family} nat {chain_name} chain, rule does not match configured parameters')
else:
# Position > 0 and is mailcow rule
if self.delete_nat_rule(_family, chain_name, rule_handle):
self.logger.logInfo(f'Remove rule for source network {saddr_net} to SNAT target {target_net} from {_family} nat {chain_name} chain, rule is at position {rule_position}')
except Exception:
self.logger.logCrit(f"Error running SNAT on {_family}, retrying..." )
else:
# rule not found
json_command = self.get_base_dict()
try:
snat_dict = {'snat': {'addr': str(target_net.network_address)} }
expr_counter = {'family': _family, 'table': 'nat', 'packets': 0, 'bytes': 0}
counter_dict = {'counter': expr_counter}
prefix_dict = {'prefix': {'addr': str(dest_net.network_address), 'len': int(dest_net.prefixlen)} }
payload_dict = {'payload': {'protocol': _family, 'field': "saddr"} }
match_dict1 = {'match': {'op': '==', 'left': payload_dict, 'right': prefix_dict} }
payload_dict2 = {'payload': {'protocol': _family, 'field': "daddr"} }
match_dict2 = {'match': {'op': '!=', 'left': payload_dict2, 'right': prefix_dict } }
expr_list = [
match_dict1,
match_dict2,
counter_dict,
snat_dict
]
rule_fields = {'family': _family,
'table': 'nat',
'chain': chain_name,
'comment': "mailcow",
'expr': expr_list }
insert_dict = {'insert': {'rule': rule_fields} }
json_command["nftables"].append(insert_dict)
if self.nft_exec_dict(json_command):
self.logger.logInfo(f'Added {_family} nat {chain_name} rule for source network {dest_net} to {target_net}')
except Exception:
self.logger.logCrit(f"Error running SNAT on {_family}, retrying...")
def get_chain_handle(self, _family: str, _table: str, chain_name: str):
chain_handle = None
# Command: 'nft list chains {family}'
_list = {'list': {'chains': {'family': _family} } }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if kernel_ruleset:
for _object in kernel_ruleset["nftables"]:
if not _object.get("chain"):
continue
chain = _object["chain"]
if chain["family"] == _family and chain["table"] == _table and chain["name"] == chain_name:
chain_handle = chain["handle"]
break
return chain_handle
def get_rules_handle(self, _family: str, _table: str, chain_name: str, _comment_filter = "mailcow"):
rule_handle = []
# Command: 'nft list chain {family} {table} {chain_name}'
_chain_opts = {'family': _family, 'table': _table, 'name': chain_name}
_list = {'list': {'chain': _chain_opts} }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if kernel_ruleset:
for _object in kernel_ruleset["nftables"]:
if not _object.get("rule"):
continue
rule = _object["rule"]
if rule["family"] == _family and rule["table"] == _table and rule["chain"] == chain_name:
if rule.get("comment") and rule["comment"] == _comment_filter:
rule_handle.append(rule["handle"])
return rule_handle
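# The dict built below is roughly the JSON form of (a sketch, family "ip"):
# nft insert rule ip filter MAILCOW ip saddr <net> counter drop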
def get_ban_ip_dict(self, ipaddr: str, _family: str):
json_command = self.get_base_dict()
expr_opt = []
ipaddr_net = ipaddress.ip_network(ipaddr, strict=False)
right_dict = {'prefix': {'addr': str(ipaddr_net.network_address), 'len': int(ipaddr_net.prefixlen) } }
left_dict = {'payload': {'protocol': _family, 'field': 'saddr'} }
match_dict = {'op': '==', 'left': left_dict, 'right': right_dict }
expr_opt.append({'match': match_dict})
counter_dict = {'counter': {'family': _family, 'table': "filter", 'packets': 0, 'bytes': 0} }
expr_opt.append(counter_dict)
expr_opt.append({'drop': "null"})
rule_dict = {'family': _family, 'table': "filter", 'chain': self.chain_name, 'expr': expr_opt}
base_dict = {'insert': {'rule': rule_dict} }
json_command["nftables"].append(base_dict)
return json_command
def get_unban_ip_dict(self, ipaddr:str, _family: str):
json_command = self.get_base_dict()
# Command: 'nft list chain {s_family} filter MAILCOW'
_chain_opts = {'family': _family, 'table': 'filter', 'name': self.chain_name}
_list = {'list': {'chain': _chain_opts} }
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
rule_handle = None
if kernel_ruleset:
for _object in kernel_ruleset["nftables"]:
if not _object.get("rule"):
continue
rule = _object["rule"]["expr"][0]["match"]
if not "payload" in rule["left"]:
continue
left_opt = rule["left"]["payload"]
if not left_opt["protocol"] == _family:
continue
if not left_opt["field"] =="saddr":
continue
# ip currently banned
rule_right = rule["right"]
if isinstance(rule_right, dict):
current_rule_ip = rule_right["prefix"]["addr"] + '/' + str(rule_right["prefix"]["len"])
else:
current_rule_ip = rule_right
current_rule_net = ipaddress.ip_network(current_rule_ip)
# ip to ban
candidate_net = ipaddress.ip_network(ipaddr, strict=False)
if current_rule_net == candidate_net:
rule_handle = _object["rule"]["handle"]
break
if rule_handle is not None:
mailcow_rule = {'family': _family, 'table': 'filter', 'chain': self.chain_name, 'handle': rule_handle}
delete_rule = {'delete': {'rule': mailcow_rule} }
json_command["nftables"].append(delete_rule)
else:
return False
return json_command
def check_mailcow_chains(self, family: str, chain: str):
position = 0
rule_found = False
chain_name = self.nft_chain_names[family]['filter'][chain]
if not chain_name: return None
_chain_opts = {'family': family, 'table': 'filter', 'name': chain_name}
_list = {'list': {'chain': _chain_opts}}
command = self.get_base_dict()
command['nftables'].append(_list)
kernel_ruleset = self.nft_exec_dict(command)
if kernel_ruleset:
for _object in kernel_ruleset["nftables"]:
if not _object.get("rule"):
continue
rule = _object["rule"]
if rule.get("comment") and rule["comment"] == "mailcow":
rule_found = True
break
position+=1
return position if rule_found else False
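# CLI sketch of the rules inserted below; the accept rule is only added when
# _allow is set and, being inserted second, ends up above the drop:
# nft insert rule ip filter MAILCOW iifname != <iface> oifname <iface> tcp dport { <ports> } counter drop
# nft insert rule ip filter MAILCOW ip saddr <allow> iifname != <iface> oifname <iface> tcp dport { <ports> } counter accept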
def create_mailcow_isolation_rule(self, _interface:str, _dports:list, _allow:str = ""):
family = "ip"
table = "filter"
comment_filter_drop = "mailcow isolation"
comment_filter_allow = "mailcow isolation allow"
json_command = self.get_base_dict()
# Delete old mailcow isolation rules
handles = self.get_rules_handle(family, table, self.chain_name, comment_filter_drop)
for handle in handles:
self.delete_filter_rule(family, self.chain_name, handle)
handles = self.get_rules_handle(family, table, self.chain_name, comment_filter_allow)
for handle in handles:
self.delete_filter_rule(family, self.chain_name, handle)
# insert mailcow isolation rule
_match_dict_drop = [
{
"match": {
"op": "!=",
"left": {
"meta": {
"key": "iifname"
}
},
"right": _interface
}
},
{
"match": {
"op": "==",
"left": {
"meta": {
"key": "oifname"
}
},
"right": _interface
}
},
{
"match": {
"op": "==",
"left": {
"payload": {
"protocol": "tcp",
"field": "dport"
}
},
"right": {
"set": _dports
}
}
},
{
"counter": {
"packets": 0,
"bytes": 0
}
},
{
"drop": None
}
]
rule_drop = { "insert": { "rule": {
"family": family,
"table": table,
"chain": self.chain_name,
"comment": comment_filter_drop,
"expr": _match_dict_drop
}}}
json_command["nftables"].append(rule_drop)
# insert mailcow isolation allow rule
if _allow != "":
_match_dict_allow = [
{
"match": {
"op": "==",
"left": {
"payload": {
"protocol": "ip",
"field": "saddr"
}
},
"right": _allow
}
},
{
"match": {
"op": "!=",
"left": {
"meta": {
"key": "iifname"
}
},
"right": _interface
}
},
{
"match": {
"op": "==",
"left": {
"meta": {
"key": "oifname"
}
},
"right": _interface
}
},
{
"match": {
"op": "==",
"left": {
"payload": {
"protocol": "tcp",
"field": "dport"
}
},
"right": {
"set": _dports
}
}
},
{
"counter": {
"packets": 0,
"bytes": 0
}
},
{
"accept": None
}
]
rule_allow = { "insert": { "rule": {
"family": family,
"table": table,
"chain": self.chain_name,
"comment": comment_filter_allow,
"expr": _match_dict_allow
}}}
json_command["nftables"].append(rule_allow)
success = self.nft_exec_dict(json_command)
if success is False:
self.logger.logCrit(f"Error adding {self.chain_name} isolation")
return False
return True

View file

@ -0,0 +1,23 @@
FROM alpine:3.20
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
ARG PIP_BREAK_SYSTEM_PACKAGES=1
WORKDIR /app
#RUN addgroup -S olefy && adduser -S olefy -G olefy \
RUN apk add --virtual .build-deps gcc musl-dev python3-dev libffi-dev openssl-dev cargo \
&& apk add --update --no-cache python3 py3-pip openssl tzdata libmagic \
&& pip3 install --upgrade pip \
&& pip3 install --upgrade asyncio python-magic \
&& pip3 install --upgrade https://github.com/decalage2/oletools/archive/master.zip \
&& apk del .build-deps
# && sed -i 's/template_injection_detected = True/template_injection_detected = False/g' /usr/lib/python3.9/site-packages/oletools/olevba.py
ADD olefy.py /app/
RUN chown -R nobody:nobody /app /tmp
USER nobody
CMD ["python3", "-u", "/app/olefy.py"]

220
data/Dockerfiles/olefy/olefy.py Executable file
View file

@ -0,0 +1,220 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Dennis Kalbhen <d.kalbhen@heinlein-support.de>
# Copyright (c) 2020, Carsten Rosenberg <c.rosenberg@heinlein-support.de>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###
#
# olefy is a little helper socket to use oletools with rspamd. (https://rspamd.com)
# Please find updates and issues here: https://github.com/HeinleinSupport/olefy
#
###
from subprocess import Popen, PIPE
import sys
import os
import logging
import asyncio
import time
import magic
import re
# merge variables from /etc/olefy.conf and the defaults
olefy_listen_addr_string = os.getenv('OLEFY_BINDADDRESS', '127.0.0.1,::1')
olefy_listen_port = int(os.getenv('OLEFY_BINDPORT', '10050'))
olefy_tmp_dir = os.getenv('OLEFY_TMPDIR', '/tmp')
olefy_python_path = os.getenv('OLEFY_PYTHON_PATH', '/usr/bin/python3')
olefy_olevba_path = os.getenv('OLEFY_OLEVBA_PATH', '/usr/local/bin/olevba3')
# 10:DEBUG, 20:INFO, 30:WARNING, 40:ERROR, 50:CRITICAL
olefy_loglvl = int(os.getenv('OLEFY_LOGLVL', 20))
olefy_min_length = int(os.getenv('OLEFY_MINLENGTH', 500))
olefy_del_tmp = int(os.getenv('OLEFY_DEL_TMP', 1))
olefy_del_tmp_failed = int(os.getenv('OLEFY_DEL_TMP_FAILED', 1))
# internal used variables
request_time = '0000000000.000000'
olefy_protocol = 'OLEFY'
olefy_ping = 'PING'
olefy_protocol_sep = '\n\n'
olefy_headers = {}
# init logging
logger = logging.getLogger('olefy')
logging.basicConfig(stream=sys.stdout, level=olefy_loglvl, format='olefy %(levelname)s %(funcName)s %(message)s')
logger.debug('olefy listen address string: {} (type {})'.format(olefy_listen_addr_string, type(olefy_listen_addr_string)))
if not olefy_listen_addr_string:
olefy_listen_addr = ""
else:
addr_re = re.compile(r'[\[" \]]')
olefy_listen_addr = addr_re.sub('', olefy_listen_addr_string.replace("'", "")).split(',')
# log runtime variables
logger.info('olefy listen address: {} (type: {})'.format(olefy_listen_addr, type(olefy_listen_addr)))
logger.info('olefy listen port: {}'.format(olefy_listen_port))
logger.info('olefy tmp dir: {}'.format(olefy_tmp_dir))
logger.info('olefy python path: {}'.format(olefy_python_path))
logger.info('olefy olevba path: {}'.format(olefy_olevba_path))
logger.info('olefy log level: {}'.format(olefy_loglvl))
logger.info('olefy min file length: {}'.format(olefy_min_length))
logger.info('olefy delete tmp file: {}'.format(olefy_del_tmp))
logger.info('olefy delete tmp file when failed: {}'.format(olefy_del_tmp_failed))
if not os.path.isfile(olefy_python_path):
logger.critical('python path not found: {}'.format(olefy_python_path))
exit(1)
if not os.path.isfile(olefy_olevba_path):
logger.critical('olevba path not found: {}'.format(olefy_olevba_path))
exit(1)
# olefy protocol function
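# A request starts with a header block terminated by a blank line, e.g.
# (Rspamd-ID value is illustrative):
# OLEFY/1.0
# Method: oletools
# Rspamd-ID: abcdef
# followed by the raw document bytes; protocol_split() fills olefy_headers
# from that block.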
def protocol_split( olefy_line ):
header_lines = olefy_line.split('\n')
for line in header_lines:
if line == 'OLEFY/1.0':
olefy_headers['olefy'] = line
elif line != '':
kv = line.split(': ')
if kv[0] != '' and kv[1] != '':
olefy_headers[kv[0]] = kv[1]
logger.debug('olefy_headers: {}'.format(olefy_headers))
# calling oletools
def oletools( stream, tmp_file_name, lid ):
if olefy_min_length > len(stream):
logger.error('{} {} bytes (Not Scanning! File smaller than {!r})'.format(lid, len(stream), olefy_min_length))
out = b'[ { "error": "File too small" } ]'
else:
with open(tmp_file_name, 'wb') as tmp_file:
tmp_file.write(stream)
file_magic = magic.Magic(mime=True, uncompress=True)
file_mime = file_magic.from_file(tmp_file_name)
logger.info('{} {} (libmagic output)'.format(lid, file_mime))
# do the olefy
cmd_tmp = Popen([olefy_python_path, olefy_olevba_path, '-a', '-j' , '-l', 'error', tmp_file_name], stdout=PIPE, stderr=PIPE)
out, err = cmd_tmp.communicate()
out = bytes(out.decode('utf-8', 'ignore').replace('  ', ' ').replace('\t', '').replace('\n', '').replace('XLMMacroDeobfuscator: pywin32 is not installed (only is required if you want to use MS Excel)', ''), encoding="utf-8")
failed = False
if len(out) < 30:
logger.error('{} olevba returned <30 chars - rc: {!r}, response: {!r}, error: {!r}'.format(lid,cmd_tmp.returncode,
out.decode('utf-8', 'ignore'), err.decode('utf-8', 'ignore')))
out = b'[ { "error": "Unhandled error - too short olevba response" } ]'
failed = True
elif len(err) > 10 and cmd_tmp.returncode == 9:
logger.error("{} olevba stderr >10 chars - rc: {!r}, response: {!r}".format(lid, cmd_tmp.returncode, err.decode("utf-8", "ignore")))
out = b'[ { "error": "Decrypt failed" } ]'
failed = True
elif len(err) > 10 and cmd_tmp.returncode > 9:
logger.error('{} olevba stderr >10 chars - rc: {!r}, response: {!r}'.format(lid, cmd_tmp.returncode, err.decode('utf-8', 'ignore')))
out = b'[ { "error": "Unhandled oletools error" } ]'
failed = True
elif cmd_tmp.returncode != 0:
logger.error('{} olevba exited with code {!r}; err: {!r}'.format(lid, cmd_tmp.returncode, err.decode('utf-8', 'ignore')))
failed = True
if failed and olefy_del_tmp_failed == 0:
logger.debug('{} {} FAILED: not deleting tmp file'.format(lid, tmp_file_name))
elif olefy_del_tmp == 1:
logger.debug('{} {} deleting tmp file'.format(lid, tmp_file_name))
os.remove(tmp_file_name)
logger.debug('{} response: {}'.format(lid, out.decode('utf-8', 'ignore')))
return out + b'\t\n\n\t'
# Asyncio data handling, default AIO-Functions
class AIO(asyncio.Protocol):
def __init__(self):
self.extra = bytearray()
def connection_made(self, transport):
global request_time
peer = transport.get_extra_info('peername')
logger.debug('{} new connection was made'.format(peer))
self.transport = transport
request_time = str(time.time())
def data_received(self, request, msgid=1):
peer = self.transport.get_extra_info('peername')
logger.debug('{} data received from new connection'.format(peer))
self.extra.extend(request)
def eof_received(self):
peer = self.transport.get_extra_info('peername')
olefy_protocol_err = False
proto_ck = self.extra[0:2000].decode('utf-8', 'ignore')
headers = proto_ck[0:proto_ck.find(olefy_protocol_sep)]
if olefy_protocol == headers[0:5]:
self.extra = bytearray(self.extra[len(headers)+2:len(self.extra)])
protocol_split(headers)
else:
olefy_protocol_err = True
is_ping = headers[0:4] == olefy_ping
rspamd_id = olefy_headers.get('Rspamd-ID', '')[:6]
lid = '<'+rspamd_id+'>' if rspamd_id else ''
tmp_file_name = olefy_tmp_dir+'/'+request_time+'.'+str(peer[1])+'.'+rspamd_id
logger.debug('{} {} chosen as tmp filename'.format(lid, tmp_file_name))
if not is_ping or olefy_loglvl == 10:
logger.info('{} {} bytes (stream size)'.format(lid, len(self.extra)))
if is_ping:
logger.debug('{} PING request'.format(peer))
out = b'PONG'
elif olefy_protocol_err or olefy_headers.get('olefy') != 'OLEFY/1.0':
logger.error('{} Protocol ERROR: no OLEFY/1.0 found'.format(lid))
out = b'[ { "error": "Protocol error" } ]'
elif olefy_headers.get('Method') == 'oletools':
out = oletools(self.extra, tmp_file_name, lid)
else:
logger.error('Protocol ERROR: Method header not found or unsupported')
out = b'[ { "error": "Protocol error: Method header not found" } ]'
self.transport.write(out)
if not is_ping or olefy_loglvl == 10:
logger.info('{} {} response send: {!r}'.format(lid, peer, out))
self.transport.close()
# start the listeners
loop = asyncio.get_event_loop()
# each client connection will create a new protocol instance
coro = loop.create_server(AIO, olefy_listen_addr, olefy_listen_port)
server = loop.run_until_complete(coro)
for sockets in server.sockets:
logger.info('serving on {}'.format(sockets.getsockname()))
# XXX serve requests until KeyboardInterrupt, not needed for production
try:
loop.run_forever()
except KeyboardInterrupt:
pass
# graceful shutdown/reload
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
logger.info('stopped serving')

View file

@ -0,0 +1,114 @@
FROM php:8.2-fpm-alpine3.18
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
# renovate: datasource=github-tags depName=krakjoe/apcu versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG APCU_PECL_VERSION=5.1.23
# renovate: datasource=github-tags depName=Imagick/imagick versioning=semver-coerced extractVersion=(?<version>.*)$
ARG IMAGICK_PECL_VERSION=3.7.0
# renovate: datasource=github-tags depName=php/pecl-mail-mailparse versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG MAILPARSE_PECL_VERSION=3.1.6
# renovate: datasource=github-tags depName=php-memcached-dev/php-memcached versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG MEMCACHED_PECL_VERSION=3.2.0
# renovate: datasource=github-tags depName=phpredis/phpredis versioning=semver-coerced extractVersion=(?<version>.*)$
ARG REDIS_PECL_VERSION=6.0.2
# renovate: datasource=github-tags depName=composer/composer versioning=semver-coerced extractVersion=(?<version>.*)$
ARG COMPOSER_VERSION=2.6.6
RUN apk add -U --no-cache autoconf \
aspell-dev \
aspell-libs \
bash \
c-client \
cyrus-sasl-dev \
freetype \
freetype-dev \
g++ \
git \
gettext \
gettext-dev \
gmp-dev \
gnupg \
icu-dev \
icu-libs \
imagemagick \
imagemagick-dev \
imap-dev \
jq \
libavif \
libavif-dev \
libjpeg-turbo \
libjpeg-turbo-dev \
libmemcached \
libmemcached-dev \
libpng \
libpng-dev \
libressl \
libressl-dev \
librsvg \
libtool \
libwebp-dev \
libxml2-dev \
libxpm \
libxpm-dev \
libzip \
libzip-dev \
linux-headers \
make \
mysql-client \
openldap-dev \
pcre-dev \
re2c \
redis \
samba-client \
zlib-dev \
tzdata \
&& pecl install APCu-${APCU_PECL_VERSION} \
&& pecl install imagick-${IMAGICK_PECL_VERSION} \
&& pecl install mailparse-${MAILPARSE_PECL_VERSION} \
&& pecl install memcached-${MEMCACHED_PECL_VERSION} \
&& pecl install redis-${REDIS_PECL_VERSION} \
&& docker-php-ext-enable apcu imagick memcached mailparse redis \
&& pecl clear-cache \
&& docker-php-ext-configure intl \
&& docker-php-ext-configure exif \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ \
--with-jpeg=/usr/include/ \
--with-webp \
--with-xpm \
--with-avif \
&& docker-php-ext-install -j 4 exif gd gettext intl ldap opcache pcntl pdo pdo_mysql pspell soap sockets sysvsem zip bcmath gmp \
&& docker-php-ext-configure imap --with-imap --with-imap-ssl \
&& docker-php-ext-install -j 4 imap \
&& curl --silent --show-error https://getcomposer.org/installer | php -- --version=${COMPOSER_VERSION} \
&& mv composer.phar /usr/local/bin/composer \
&& chmod +x /usr/local/bin/composer \
&& apk del --purge autoconf \
aspell-dev \
cyrus-sasl-dev \
freetype-dev \
g++ \
gettext-dev \
icu-dev \
imagemagick-dev \
imap-dev \
libavif-dev \
libjpeg-turbo-dev \
libmemcached-dev \
libpng-dev \
libressl-dev \
libwebp-dev \
libxml2-dev \
libxpm-dev \
libzip-dev \
linux-headers \
make \
openldap-dev \
pcre-dev \
zlib-dev
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["php-fpm"]

View file

@ -0,0 +1,216 @@
#!/bin/bash
function array_by_comma { local IFS=","; echo "$*"; }
# Wait for containers
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for SQL..."
sleep 2
done
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
echo "Waiting for Redis..."
sleep 2
done
# Check mysql_upgrade (master and slave)
CONTAINER_ID=
until [[ ! -z "${CONTAINER_ID}" ]] && [[ "${CONTAINER_ID}" =~ ^[[:alnum:]]*$ ]]; do
CONTAINER_ID=$(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"mysql-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" 2> /dev/null)
echo "Could not get mysql-mailcow container id... trying again"
sleep 2
done
echo "MySQL @ ${CONTAINER_ID}"
SQL_LOOP_C=0
SQL_CHANGED=0
until [[ ${SQL_UPGRADE_STATUS} == 'success' ]]; do
if [ ${SQL_LOOP_C} -gt 4 ]; then
echo "Tried to upgrade MySQL and failed, giving up after ${SQL_LOOP_C} retries and starting container (oops, not good)"
break
fi
SQL_FULL_UPGRADE_RETURN=$(curl --silent --insecure -XPOST https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${CONTAINER_ID}/exec -d '{"cmd":"system", "task":"mysql_upgrade"}' --silent -H 'Content-type: application/json')
SQL_UPGRADE_STATUS=$(echo ${SQL_FULL_UPGRADE_RETURN} | jq -r .type)
SQL_LOOP_C=$((SQL_LOOP_C+1))
echo "SQL upgrade iteration #${SQL_LOOP_C}"
if [[ ${SQL_UPGRADE_STATUS} == 'warning' ]]; then
SQL_CHANGED=1
echo "MySQL applied an upgrade, debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
sleep 3
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for SQL to return, please wait"
sleep 2
done
continue
elif [[ ${SQL_UPGRADE_STATUS} == 'success' ]]; then
echo "MySQL is up-to-date - debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
else
echo "No valid reponse for mysql_upgrade was received, debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
fi
done
# doing post-installation stuff, if SQL was upgraded (master and slave)
if [ ${SQL_CHANGED} -eq 1 ]; then
POSTFIX=$(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"postfix-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" 2> /dev/null)
if [[ -z "${POSTFIX}" ]] || ! [[ "${POSTFIX}" =~ ^[[:alnum:]]*$ ]]; then
echo "Could not determine Postfix container ID, skipping Postfix restart."
else
echo "Restarting Postfix"
curl -X POST --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${POSTFIX}/restart | jq -r '.msg'
echo "Sleeping 5 seconds..."
sleep 5
fi
fi
# Check mysql tz import (master and slave)
TZ_CHECK=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT CONVERT_TZ('2019-11-02 23:33:00','Europe/Berlin','UTC') AS time;" -BN 2> /dev/null)
if [[ -z ${TZ_CHECK} ]] || [[ "${TZ_CHECK}" == "NULL" ]]; then
SQL_FULL_TZINFO_IMPORT_RETURN=$(curl --silent --insecure -XPOST https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${CONTAINER_ID}/exec -d '{"cmd":"system", "task":"mysql_tzinfo_to_sql"}' --silent -H 'Content-type: application/json')
echo "MySQL mysql_tzinfo_to_sql - debug output:"
echo ${SQL_FULL_TZINFO_IMPORT_RETURN}
fi
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing..."
# Set a default release format
if [[ -z $(${REDIS_CMDLINE} --raw GET Q_RELEASE_FORMAT) ]]; then
${REDIS_CMDLINE} --raw SET Q_RELEASE_FORMAT raw
fi
# Set max age of q items - if unset
if [[ -z $(${REDIS_CMDLINE} --raw GET Q_MAX_AGE) ]]; then
${REDIS_CMDLINE} --raw SET Q_MAX_AGE 365
fi
# Set default password policy - if unset
if [[ -z $(${REDIS_CMDLINE} --raw HGET PASSWD_POLICY length) ]]; then
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY length 6
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY chars 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY special_chars 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY lowerupper 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY numbers 0
fi
# Trigger db init
echo "Running DB init..."
php -c /usr/local/etc/php -f /web/inc/init_db.inc.php
# Recreating domain map
echo "Rebuilding domain map in Redis..."
declare -a DOMAIN_ARR
${REDIS_CMDLINE} DEL DOMAIN_MAP > /dev/null
while read line
do
DOMAIN_ARR+=("$line")
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain" -Bs)
while read line
do
DOMAIN_ARR+=("$line")
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT alias_domain FROM alias_domain" -Bs)
if [[ ${#DOMAIN_ARR[@]} -gt 0 ]]; then
for domain in "${DOMAIN_ARR[@]}"; do
${REDIS_CMDLINE} HSET DOMAIN_MAP ${domain} 1 > /dev/null
done
fi
# Set API options if env vars are not empty
if [[ ${API_ALLOW_FROM} != "invalid" ]] && [[ ! -z ${API_ALLOW_FROM} ]]; then
IFS=',' read -r -a API_ALLOW_FROM_ARR <<< "${API_ALLOW_FROM}"
declare -a VALIDATED_API_ALLOW_FROM_ARR
REGEX_IP6='^([0-9a-fA-F]{0,4}:){1,7}[0-9a-fA-F]{0,4}(/([0-9]|[1-9][0-9]|1[0-1][0-9]|12[0-8]))?$'
REGEX_IP4='^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+(/([0-9]|[1-2][0-9]|3[0-2]))?$'
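# e.g. "192.0.2.0/24,2001:db8::/64,198.51.100.7" would pass validation,
# while hostnames or malformed entries are silently dropped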
for IP in "${API_ALLOW_FROM_ARR[@]}"; do
if [[ ${IP} =~ ${REGEX_IP6} ]] || [[ ${IP} =~ ${REGEX_IP4} ]]; then
VALIDATED_API_ALLOW_FROM_ARR+=("${IP}")
fi
done
VALIDATED_IPS=$(array_by_comma ${VALIDATED_API_ALLOW_FROM_ARR[*]})
if [[ ! -z ${VALIDATED_IPS} ]]; then
if [[ ${API_KEY} != "invalid" ]] && [[ ! -z ${API_KEY} ]]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELETE FROM api WHERE access = 'rw';
INSERT INTO api (api_key, active, allow_from, access) VALUES ("${API_KEY}", "1", "${VALIDATED_IPS}", "rw");
EOF
fi
if [[ ${API_KEY_READ_ONLY} != "invalid" ]] && [[ ! -z ${API_KEY_READ_ONLY} ]]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELETE FROM api WHERE access = 'ro';
INSERT INTO api (api_key, active, allow_from, access) VALUES ("${API_KEY_READ_ONLY}", "1", "${VALIDATED_IPS}", "ro");
EOF
fi
fi
fi
# Create events (master only, STATUS for event on slave will be SLAVESIDE_DISABLED)
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DROP EVENT IF EXISTS clean_spamalias;
DELIMITER //
CREATE EVENT clean_spamalias
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE FROM spamalias WHERE validity < UNIX_TIMESTAMP();
END;
//
DELIMITER ;
DROP EVENT IF EXISTS clean_oauth2;
DELIMITER //
CREATE EVENT clean_oauth2
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE FROM oauth_refresh_tokens WHERE expires < NOW();
DELETE FROM oauth_access_tokens WHERE expires < NOW();
DELETE FROM oauth_authorization_codes WHERE expires < NOW();
END;
//
DELIMITER ;
DROP EVENT IF EXISTS clean_sasl_log;
DELIMITER //
CREATE EVENT clean_sasl_log
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE sasl_log.* FROM sasl_log
LEFT JOIN (
SELECT username, service, MAX(datetime) AS lastdate
FROM sasl_log
GROUP BY username, service
) AS last ON sasl_log.username = last.username AND sasl_log.service = last.service
WHERE datetime < DATE_SUB(NOW(), INTERVAL 31 DAY) AND datetime < lastdate;
DELETE FROM sasl_log
WHERE username NOT IN (SELECT username FROM mailbox) AND
datetime < DATE_SUB(NOW(), INTERVAL 31 DAY);
END;
//
DELIMITER ;
EOF
fi
# Create dummy for custom overrides of mailcow style
[[ ! -f /web/css/build/0081-custom-mailcow.css ]] && echo '/* Autogenerated by mailcow */' > /web/css/build/0081-custom-mailcow.css
# Fix permissions for global filters
chown -R 82:82 /global_sieve/*
# Fix permissions on twig cache folder
chown -R 82:82 /web/templates/cache
# Clear cache
find /web/templates/cache/* -not -name '.gitkeep' -delete
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
exec "$@"

View file

@ -0,0 +1,63 @@
FROM debian:bookworm-slim
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ENV LC_ALL C
RUN dpkg-divert --local --rename --add /sbin/initctl \
&& ln -sf /bin/true /sbin/initctl \
&& dpkg-divert --local --rename --add /usr/bin/ischroot \
&& ln -sf /bin/true /usr/bin/ischroot
# Add groups and users before installing Postfix to not break compatibility
RUN groupadd -g 102 postfix \
&& groupadd -g 103 postdrop \
&& useradd -g postfix -u 101 -d /var/spool/postfix -s /usr/sbin/nologin postfix \
&& apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
dirmngr \
dnsutils \
gnupg \
libsasl2-modules \
mariadb-client \
perl \
postfix \
postfix-mysql \
postfix-pcre \
redis-tools \
sasl2-bin \
sudo \
supervisor \
syslog-ng \
syslog-ng-core \
syslog-ng-mod-redis \
tzdata \
&& rm -rf /var/lib/apt/lists/* \
&& touch /etc/default/locale \
&& printf '#!/bin/bash\n/usr/sbin/postconf -c /opt/postfix/conf "$@"' > /usr/local/sbin/postconf \
&& chmod +x /usr/local/sbin/postconf
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY postfix.sh /opt/postfix.sh
COPY rspamd-pipe-ham /usr/local/bin/rspamd-pipe-ham
COPY rspamd-pipe-spam /usr/local/bin/rspamd-pipe-spam
COPY whitelist_forwardinghosts.sh /usr/local/bin/whitelist_forwardinghosts.sh
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /opt/postfix.sh \
/usr/local/bin/rspamd-pipe-ham \
/usr/local/bin/rspamd-pipe-spam \
/usr/local/bin/whitelist_forwardinghosts.sh \
/usr/local/sbin/stop-supervisor.sh
RUN rm -rf /tmp/* /var/tmp/*
EXPOSE 588
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

View file

@ -0,0 +1,15 @@
#!/bin/bash
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
fi
exec "$@"

View file

@ -0,0 +1,519 @@
#!/bin/bash
trap "postfix stop" EXIT
[[ ! -d /opt/postfix/conf/sql/ ]] && mkdir -p /opt/postfix/conf/sql/
# Wait for MySQL to warm-up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
until dig +short mailcow.email > /dev/null; do
echo "Waiting for DNS..."
sleep 1
done
cat <<EOF > /etc/aliases
# Autogenerated by mailcow
null: /dev/null
watchdog: /dev/null
ham: "|/usr/local/bin/rspamd-pipe-ham"
spam: "|/usr/local/bin/rspamd-pipe-spam"
EOF
newaliases;
# create sni configuration
if [[ "${SKIP_LETS_ENCRYPT}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo -n "" > /opt/postfix/conf/sni.map
else
echo -n "" > /opt/postfix/conf/sni.map;
for cert_dir in /etc/ssl/mail/*/ ; do
if [[ ! -f ${cert_dir}domains ]] || [[ ! -f ${cert_dir}cert.pem ]] || [[ ! -f ${cert_dir}key.pem ]]; then
continue;
fi
IFS=" " read -r -a domains <<< "$(cat "${cert_dir}domains")"
for domain in "${domains[@]}"; do
echo -n "${domain} ${cert_dir}key.pem ${cert_dir}cert.pem" >> /opt/postfix/conf/sni.map;
echo "" >> /opt/postfix/conf/sni.map;
done
done
fi
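# sni.map now holds one "<domain> <key> <cert>" line per certificate domain, e.g.
# (illustrative path): mail.example.org /etc/ssl/mail/mail.example.org/key.pem /etc/ssl/mail/mail.example.org/cert.pem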
postmap -F hash:/opt/postfix/conf/sni.map;
cat <<EOF > /opt/postfix/conf/sql/mysql_relay_ne.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT IF(EXISTS(SELECT address, domain FROM alias
WHERE address = '%s'
AND domain IN (
SELECT domain FROM domain
WHERE backupmx = '1'
AND relay_all_recipients = '1'
AND relay_unknown_only = '1')
), 'lmtp:inet:dovecot:24', NULL) AS 'transport'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_relay_recipient_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT DISTINCT
CASE WHEN '%d' IN (
SELECT domain FROM domain
WHERE relay_all_recipients=1
AND domain='%d'
AND backupmx=1
)
THEN '%s' ELSE (
SELECT goto FROM alias WHERE address='%s' AND active='1'
)
END AS result;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_tls_policy_override_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT(policy, ' ', parameters) AS tls_policy FROM tls_policy_override WHERE active = '1' AND dest = '%s'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_tls_enforce_in_policy.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT IF(EXISTS(
SELECT 'TLS_ACTIVE' FROM alias
LEFT OUTER JOIN mailbox ON mailbox.username = alias.goto
WHERE (address='%s'
OR address IN (
SELECT CONCAT('%u', '@', target_domain) FROM alias_domain
WHERE alias_domain='%d'
)
) AND JSON_UNQUOTE(JSON_VALUE(attributes, '$.tls_enforce_in')) = '1' AND mailbox.active = '1'
), 'reject_plaintext_session', NULL) AS 'tls_enforce_in';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sender_dependent_default_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT GROUP_CONCAT(transport SEPARATOR '') AS transport_maps
FROM (
SELECT IF(EXISTS(SELECT 'smtp_type' FROM alias
LEFT OUTER JOIN mailbox ON mailbox.username = alias.goto
WHERE (address = '%s'
OR address IN (
SELECT CONCAT('%u', '@', target_domain) FROM alias_domain
WHERE alias_domain = '%d'
)
)
AND JSON_UNQUOTE(JSON_VALUE(attributes, '$.tls_enforce_out')) = '1'
AND mailbox.active = '1'
), 'smtp_enforced_tls:', 'smtp:') AS 'transport'
UNION ALL
SELECT COALESCE(
(SELECT hostname FROM relayhosts
LEFT OUTER JOIN mailbox ON JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.relayhost')) = relayhosts.id
WHERE relayhosts.active = '1'
AND (
mailbox.username IN (SELECT alias.goto from alias
JOIN mailbox ON mailbox.username = alias.goto
WHERE alias.active = '1'
AND alias.address = '%s'
AND alias.address NOT LIKE '@%%'
)
)
),
(SELECT hostname FROM relayhosts
LEFT OUTER JOIN domain ON domain.relayhost = relayhosts.id
WHERE relayhosts.active = '1'
AND (domain.domain = '%d'
OR domain.domain IN (
SELECT target_domain FROM alias_domain
WHERE alias_domain = '%d'
)
)
)
)
) AS transport_view;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT('smtp_via_transport_maps:', nexthop) AS transport FROM transports
WHERE active = '1'
AND destination = '%s';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_resource_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT 'null@localhost' FROM mailbox
WHERE kind REGEXP 'location|thing|group' AND username = '%s';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sasl_passwd_maps_sender_dependent.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT_WS(':', username, password) AS auth_data FROM relayhosts
WHERE id IN (
SELECT COALESCE(
(SELECT id FROM relayhosts
LEFT OUTER JOIN domain ON domain.relayhost = relayhosts.id
WHERE relayhosts.active = '1'
AND (domain.domain = '%d'
OR domain.domain IN (
SELECT target_domain FROM alias_domain
WHERE alias_domain = '%d'
)
)
),
(SELECT id FROM relayhosts
LEFT OUTER JOIN mailbox ON JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.relayhost')) = relayhosts.id
WHERE relayhosts.active = '1'
AND (
mailbox.username IN (
SELECT alias.goto from alias
JOIN mailbox ON mailbox.username = alias.goto
WHERE alias.active = '1'
AND alias.address = '%s'
AND alias.address NOT LIKE '@%%'
)
)
)
)
)
AND active = '1'
AND username != '';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sasl_passwd_maps_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT_WS(':', username, password) AS auth_data FROM transports
WHERE nexthop = '%s'
AND active = '1'
AND username != ''
LIMIT 1;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_alias_domain_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT username FROM mailbox, alias_domain
WHERE alias_domain.alias_domain = '%d'
AND mailbox.username = CONCAT('%u', '@', alias_domain.target_domain)
AND (mailbox.active = '1' OR mailbox.active = '2')
AND alias_domain.active='1'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_alias_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT goto FROM alias
WHERE address='%s'
AND (active='1' OR active='2');
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_recipient_bcc_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT bcc_dest FROM bcc_maps
WHERE local_dest='%s'
AND type='rcpt'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sender_bcc_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT bcc_dest FROM bcc_maps
WHERE local_dest='%s'
AND type='sender'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_recipient_canonical_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT new_dest FROM recipient_maps
WHERE old_dest='%s'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_domains_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT alias_domain from alias_domain WHERE alias_domain='%s' AND active='1'
UNION
SELECT domain FROM domain
WHERE domain='%s'
AND active = '1'
AND backupmx = '0'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_mailbox_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT(JSON_UNQUOTE(JSON_VALUE(attributes, '$.mailbox_format')), mailbox_path_prefix, '%d/%u/') FROM mailbox WHERE username='%s' AND (active = '1' OR active = '2')
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_relay_domain_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT domain FROM domain WHERE domain='%s' AND backupmx = '1' AND active = '1'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_sender_acl.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
# The first SELECT queries domain and alias_domain to determine whether the domains are active.
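# The UNIONed SELECTs then add explicit sender_acl grants and mailboxes
# reached via alias domains.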
query = SELECT goto FROM alias
WHERE id IN (
SELECT COALESCE (
(
SELECT id FROM alias
WHERE address='%s'
AND (active='1' OR active='2')
), (
SELECT id FROM alias
WHERE address='@%d'
AND (active='1' OR active='2')
)
)
)
AND active='1'
AND (domain IN
(SELECT domain FROM domain
WHERE domain='%d'
AND active='1')
OR domain in (
SELECT alias_domain FROM alias_domain
WHERE alias_domain='%d'
AND active='1'
)
)
UNION
SELECT logged_in_as FROM sender_acl
WHERE send_as='@%d'
OR send_as='%s'
OR send_as='*'
OR send_as IN (
SELECT CONCAT('@',target_domain) FROM alias_domain
WHERE alias_domain = '%d')
OR send_as IN (
SELECT CONCAT('%u','@',target_domain) FROM alias_domain
WHERE alias_domain = '%d')
AND logged_in_as NOT IN (
SELECT goto FROM alias
WHERE address='%s')
UNION
SELECT username FROM mailbox, alias_domain
WHERE alias_domain.alias_domain = '%d'
AND mailbox.username = CONCAT('%u','@',alias_domain.target_domain)
AND (mailbox.active = '1' OR mailbox.active ='2')
AND alias_domain.active='1';
EOF
# MX-based routing
cat <<EOF > /opt/postfix/conf/sql/mysql_mbr_access_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT('FILTER smtp_via_transport_maps:', nexthop) as transport FROM transports
WHERE '%s' REGEXP destination
AND active='1'
AND is_mx_based='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_spamalias_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT goto FROM spamalias
WHERE address='%s'
AND validity >= UNIX_TIMESTAMP()
EOF
if [ ! -f /opt/postfix/conf/dns_blocklists.cf ]; then
cat <<EOF > /opt/postfix/conf/dns_blocklists.cf
# This file can be edited.
# Delete this file and restart postfix container to revert any changes.
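# Format: <dnsbl domain>[=<reply pattern>]*<weight> - negative weights act as an
# allowlist, positive weights count towards postscreen_dnsbl_threshold.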
postscreen_dnsbl_sites = wl.mailspike.net=127.0.0.[18;19;20]*-2
hostkarma.junkemailfilter.com=127.0.0.1*-2
list.dnswl.org=127.0.[0..255].0*-2
list.dnswl.org=127.0.[0..255].1*-4
list.dnswl.org=127.0.[0..255].2*-6
list.dnswl.org=127.0.[0..255].3*-8
ix.dnsbl.manitu.net*2
bl.spamcop.net*2
bl.suomispam.net*2
hostkarma.junkemailfilter.com=127.0.0.2*3
hostkarma.junkemailfilter.com=127.0.0.4*2
hostkarma.junkemailfilter.com=127.0.1.2*1
backscatter.spameatingmonkey.net*2
bl.ipv6.spameatingmonkey.net*2
bl.spameatingmonkey.net*2
b.barracudacentral.org=127.0.0.2*7
bl.mailspike.net=127.0.0.2*5
bl.mailspike.net=127.0.0.[10;11;12]*4
EOF
fi
DNSBL_CONFIG=$(grep -v '^#' /opt/postfix/conf/dns_blocklists.cf | grep '\S')
if [ ! -z "$DNSBL_CONFIG" ]; then
echo -e "\e[33mChecking if ASN for your IP is listed for Spamhaus Bad ASN List...\e[0m"
if [ -n "$SPAMHAUS_DQS_KEY" ]; then
echo -e "\e[32mDetected SPAMHAUS_DQS_KEY variable from mailcow.conf...\e[0m"
echo -e "\e[33mUsing DQS Blocklists from Spamhaus!\e[0m"
SPAMHAUS_DNSBL_CONFIG=$(cat <<EOF
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.[4..7]*6
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.[10;11]*8
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.3*4
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.2*3
postscreen_dnsbl_reply_map = texthash:/opt/postfix/conf/dnsbl_reply.map
EOF
cat <<EOF > /opt/postfix/conf/dnsbl_reply.map
# Autogenerated by mailcow, using Spamhaus DQS reply domains
${SPAMHAUS_DQS_KEY}.sbl.dq.spamhaus.net sbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.xbl.dq.spamhaus.net xbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.pbl.dq.spamhaus.net pbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net zen.spamhaus.org
${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net dbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net zrd.spamhaus.org
EOF
)
else
if [ -f "/opt/postfix/conf/dnsbl_reply.map" ]; then
rm /opt/postfix/conf/dnsbl_reply.map
fi
response=$(curl --connect-timeout 15 --max-time 30 -s -o /dev/null -w "%{http_code}" "https://asn-check.mailcow.email")
if [ "$response" -eq 503 ]; then
echo -e "\e[31mThe AS of your IP is listed as a banned AS from Spamhaus!\e[0m"
echo -e "\e[33mNo SPAMHAUS_DQS_KEY found... Skipping Spamhaus blocklists entirely!\e[0m"
SPAMHAUS_DNSBL_CONFIG=""
elif [ "$response" -eq 200 ]; then
echo -e "\e[32mThe AS of your IP is NOT listed as a banned AS from Spamhaus!\e[0m"
echo -e "\e[33mUsing the open Spamhaus blocklists.\e[0m"
SPAMHAUS_DNSBL_CONFIG=$(cat <<EOF
zen.spamhaus.org=127.0.0.[10;11]*8
zen.spamhaus.org=127.0.0.[4..7]*6
zen.spamhaus.org=127.0.0.3*4
zen.spamhaus.org=127.0.0.2*3
EOF
)
else
echo -e "\e[31mWe couldn't determine your AS... (maybe DNS/Network issue?) Response Code: $response\e[0m"
echo -e "\e[33mDeactivating Spamhaus DNS Blocklists to be on the safe site!\e[0m"
SPAMHAUS_DNSBL_CONFIG=""
fi
fi
fi
# Reset main.cf
sed -i '/Overrides/q' /opt/postfix/conf/main.cf
echo >> /opt/postfix/conf/main.cf
# Append postscreen dnsbl sites to main.cf
if [ ! -z "$DNSBL_CONFIG" ]; then
echo -e "${DNSBL_CONFIG}\n${SPAMHAUS_DNSBL_CONFIG}" >> /opt/postfix/conf/main.cf
fi
# Append user overrides
echo -e "\n# User Overrides" >> /opt/postfix/conf/main.cf
touch /opt/postfix/conf/extra.cf
sed -i '/\$myhostname/! { /myhostname/d }' /opt/postfix/conf/extra.cf
echo -e "myhostname = ${MAILCOW_HOSTNAME}\n$(cat /opt/postfix/conf/extra.cf)" > /opt/postfix/conf/extra.cf
cat /opt/postfix/conf/extra.cf >> /opt/postfix/conf/main.cf
if [ ! -f /opt/postfix/conf/custom_transport.pcre ]; then
echo "Creating dummy custom_transport.pcre"
touch /opt/postfix/conf/custom_transport.pcre
fi
if [[ ! -f /opt/postfix/conf/custom_postscreen_whitelist.cidr ]]; then
echo "Creating dummy custom_postscreen_whitelist.cidr"
cat <<EOF > /opt/postfix/conf/custom_postscreen_whitelist.cidr
# Autogenerated by mailcow
# Rules are evaluated in the order as specified.
# Blacklist 192.168.* except 192.168.0.1.
# 192.168.0.1 permit
# 192.168.0.0/16 reject
EOF
fi
# Fix Postfix permissions
chown -R root:postfix /opt/postfix/conf/sql/ /opt/postfix/conf/custom_transport.pcre
chmod 640 /opt/postfix/conf/sql/*.cf /opt/postfix/conf/custom_transport.pcre
chgrp -R postdrop /var/spool/postfix/public
chgrp -R postdrop /var/spool/postfix/maildrop
postfix set-permissions
# Check Postfix configuration
postconf -c /opt/postfix/conf > /dev/null
if [[ $? != 0 ]]; then
echo "Postfix configuration error, refusing to start."
exit 1
else
postfix -c /opt/postfix/conf start
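  # postfix start detaches; the long sleep (~4 years) keeps this script in the
  # foreground so the process supervisor treats the service as running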
sleep 126144000
fi

View file

@ -0,0 +1,9 @@
#!/bin/bash
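# Pipe script (e.g. invoked when a mail is to be learned as ham): buffers the
# message from stdin, then feeds it to rspamd's /learnham and /fuzzyadd endpoints
# over the unix socket; flag 13 appears to be mailcow's fuzzy flag for ham.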
FILE=/tmp/mail$$
cat > $FILE
trap "/bin/rm -f $FILE" 0 1 2 3 13 15
cat ${FILE} | /usr/bin/curl -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd/learnham
cat ${FILE} | /usr/bin/curl -H "Flag: 13" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd/fuzzyadd
exit 0

View file

@ -0,0 +1,9 @@
#!/bin/bash
FILE=/tmp/mail$$
cat > $FILE
trap "/bin/rm -f $FILE" 0 1 2 3 13 15
cat ${FILE} | /usr/bin/curl -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd/learnspam
cat ${FILE} | /usr/bin/curl -H "Flag: 11" -s --data-binary @- --unix-socket /var/lib/rspamd/rspamd.sock http://rspamd/fuzzyadd
exit 0

View file

@ -0,0 +1,8 @@
#!/bin/bash
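# Supervisor event listener: after signalling READY, any process state event
# received on stdin triggers SIGQUIT (kill -3) to supervisord, shutting the
# whole container down when one of the supervised programs dies.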
printf "READY\n";
while read line; do
echo "Processing Event: $line" >&2;
kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin

View file

@ -0,0 +1,24 @@
[supervisord]
pidfile=/var/run/supervisord.pid
nodaemon=true
user=root
[program:syslog-ng]
command=/usr/sbin/syslog-ng --foreground --no-caps
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
[program:postfix]
command=/opt/postfix.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh
events=PROCESS_STATE_STOPPED, PROCESS_STATE_EXITED, PROCESS_STATE_FATAL

View file

@ -0,0 +1,53 @@
@version: 3.38
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
dns_cache(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
};
source s_src {
unix-stream("/dev/log");
internal();
};
destination d_stdout { pipe("/dev/stdout"); };
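# Backticked values (REDIS_SLAVEOF_IP, REDIS_SLAVEOF_PORT) are filled in from the
# environment when syslog-ng parses this file.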
destination d_redis_ui_log {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
command("LPUSH" "POSTFIX_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
filter f_mail { facility(mail); };
# start
# 'overriding earlier entry' warnings are still displayed when the entrypoint runs its initial check;
# they are hidden from postfix-mailcow's syslog output to reduce repeated messages.
# Some other warnings are ignored as well.
filter f_ignore {
not match("overriding earlier entry" value("MESSAGE"));
not match("TLS SNI from checks.mailcow.email" value("MESSAGE"));
not match("no SASL support" value("MESSAGE"));
not facility (local0, local1, local2, local3, local4, local5, local6, local7);
};
# end
log {
source(s_src);
filter(f_ignore);
destination(d_stdout);
filter(f_mail);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};

View file

@ -0,0 +1,53 @@
@version: 3.38
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
dns_cache(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
};
source s_src {
unix-stream("/dev/log");
internal();
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
redis(
host("redis-mailcow")
persist-name("redis1")
port(6379)
command("LPUSH" "POSTFIX_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("redis-mailcow")
persist-name("redis2")
port(6379)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
filter f_mail { facility(mail); };
# start
# 'overriding earlier entry' warnings are still displayed when the entrypoint runs its initial check;
# they are hidden from postfix-mailcow's syslog output to reduce repeated messages.
# Some other warnings are ignored as well.
filter f_ignore {
not match("overriding earlier entry" value("MESSAGE"));
not match("TLS SNI from checks.mailcow.email" value("MESSAGE"));
not match("no SASL support" value("MESSAGE"));
not facility (local0, local1, local2, local3, local4, local5, local6, local7);
};
# end
log {
source(s_src);
filter(f_ignore);
destination(d_stdout);
filter(f_mail);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};

View file

@ -0,0 +1,12 @@
#!/bin/bash
while read QUERY; do
QUERY=($QUERY)
if [ "${QUERY[0]}" != "get" ]; then
echo "500 dunno"
continue
fi
  result=$(curl -s "http://nginx:8081/forwardinghosts.php?host=${QUERY[1]}")
  logger -t whitelist_forwardinghosts -p mail.info "Look up ${QUERY[1]} on whitelist, result $result"
  echo "${result}"
done

View file

@ -0,0 +1,40 @@
FROM debian:bookworm-slim
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ARG RSPAMD_VER=rspamd_3.9.1-1~82f43560f
ARG CODENAME=bookworm
ENV LC_ALL=C
RUN apt-get update && apt-get install -y \
tzdata \
ca-certificates \
gnupg2 \
apt-transport-https \
dnsutils \
netcat-traditional \
wget \
redis-tools \
procps \
nano \
lua-cjson \
&& arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) \
&& wget -P /tmp https://rspamd.com/apt-stable/pool/main/r/rspamd/${RSPAMD_VER}~${CODENAME}_${arch}.deb\
&& apt install -y /tmp/${RSPAMD_VER}~${CODENAME}_${arch}.deb \
&& rm -rf /var/lib/apt/lists/* /tmp/*\
&& apt-get autoremove --purge \
&& apt-get clean \
&& mkdir -p /run/rspamd \
&& chown _rspamd:_rspamd /run/rspamd \
&& echo 'alias ll="ls -la --color"' >> ~/.bashrc \
&& sed -i 's/#analysis_keyword_table > 0/analysis_cat_table.macro_exist == "M"/g' /usr/share/rspamd/lualib/lua_scanners/oletools.lua
COPY settings.conf /etc/rspamd/settings.conf
COPY set_worker_password.sh /set_worker_password.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
STOPSIGNAL SIGTERM
CMD ["/usr/bin/rspamd", "-f", "-u", "_rspamd", "-g", "_rspamd"]

View file

@ -0,0 +1,313 @@
#!/bin/bash
until nc phpfpm 9001 -z; do
echo "Waiting for PHP on port 9001..."
sleep 3
done
until nc phpfpm 9002 -z; do
echo "Waiting for PHP on port 9002..."
sleep 3
done
mkdir -p /etc/rspamd/plugins.d \
/etc/rspamd/custom
touch /etc/rspamd/rspamd.conf.local \
/etc/rspamd/rspamd.conf.override
chmod 755 /var/lib/rspamd
[[ ! -f /etc/rspamd/override.d/worker-controller-password.inc ]] && echo '# Autogenerated by mailcow' > /etc/rspamd/override.d/worker-controller-password.inc
echo ${IPV4_NETWORK}.0/24 > /etc/rspamd/custom/mailcow_networks.map
echo ${IPV6_NETWORK} >> /etc/rspamd/custom/mailcow_networks.map
DOVECOT_V4=
DOVECOT_V6=
until [[ ! -z ${DOVECOT_V4} ]]; do
DOVECOT_V4=$(dig a dovecot +short)
DOVECOT_V6=$(dig aaaa dovecot +short)
[[ ! -z ${DOVECOT_V4} ]] && break;
echo "Waiting for Dovecot..."
sleep 3
done
echo ${DOVECOT_V4}/32 > /etc/rspamd/custom/dovecot_trusted.map
if [[ ! -z ${DOVECOT_V6} ]]; then
echo ${DOVECOT_V6}/128 >> /etc/rspamd/custom/dovecot_trusted.map
fi
RSPAMD_V4=
RSPAMD_V6=
until [[ ! -z ${RSPAMD_V4} ]]; do
RSPAMD_V4=$(dig a rspamd +short)
RSPAMD_V6=$(dig aaaa rspamd +short)
[[ ! -z ${RSPAMD_V4} ]] && break;
echo "Waiting for Rspamd..."
sleep 3
done
echo ${RSPAMD_V4}/32 > /etc/rspamd/custom/rspamd_trusted.map
if [[ ! -z ${RSPAMD_V6} ]]; then
echo ${RSPAMD_V6}/128 >> /etc/rspamd/custom/rspamd_trusted.map
fi
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
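  # Master/slave Redis setup: rspamd reads from the local replica and writes to
  # the remote master; the local redis-mailcow is demoted to a replica below.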
cat <<EOF > /etc/rspamd/local.d/redis.conf
read_servers = "redis:6379";
write_servers = "${REDIS_SLAVEOF_IP}:${REDIS_SLAVEOF_PORT}";
timeout = 10;
EOF
until [[ $(redis-cli -h redis-mailcow PING) == "PONG" ]]; do
echo "Waiting for Redis @redis-mailcow..."
sleep 2
done
until [[ $(redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} PING) == "PONG" ]]; do
echo "Waiting for Redis @${REDIS_SLAVEOF_IP}..."
sleep 2
done
redis-cli -h redis-mailcow SLAVEOF ${REDIS_SLAVEOF_IP} ${REDIS_SLAVEOF_PORT}
else
cat <<EOF > /etc/rspamd/local.d/redis.conf
servers = "redis:6379";
timeout = 10;
EOF
until [[ $(redis-cli -h redis-mailcow PING) == "PONG" ]]; do
echo "Waiting for Redis slave..."
sleep 2
done
redis-cli -h redis-mailcow SLAVEOF NO ONE
fi
# Provide additional lua modules
ln -s /usr/lib/$(uname -m)-linux-gnu/liblua5.1-cjson.so.0.0.0 /usr/lib/rspamd/cjson.so
chown -R _rspamd:_rspamd /var/lib/rspamd \
/etc/rspamd/local.d \
/etc/rspamd/override.d \
/etc/rspamd/rspamd.conf.local \
/etc/rspamd/rspamd.conf.override \
/etc/rspamd/plugins.d
# Fix missing default global maps, if any
# These exist in the mailcow UI and should not be removed
touch /etc/rspamd/custom/global_mime_from_blacklist.map \
/etc/rspamd/custom/global_rcpt_blacklist.map \
/etc/rspamd/custom/global_smtp_from_blacklist.map \
/etc/rspamd/custom/global_mime_from_whitelist.map \
/etc/rspamd/custom/global_rcpt_whitelist.map \
/etc/rspamd/custom/global_smtp_from_whitelist.map \
/etc/rspamd/custom/bad_languages.map \
/etc/rspamd/custom/sa-rules \
/etc/rspamd/custom/dovecot_trusted.map \
/etc/rspamd/custom/rspamd_trusted.map \
/etc/rspamd/custom/mailcow_networks.map \
/etc/rspamd/custom/ip_wl.map \
/etc/rspamd/custom/fishy_tlds.map \
/etc/rspamd/custom/bad_words.map \
/etc/rspamd/custom/bad_asn.map \
/etc/rspamd/custom/bad_words_de.map \
/etc/rspamd/custom/bulk_header.map \
/etc/rspamd/custom/bad_header.map
# www-data (82) group needs to write to these files
chown _rspamd:_rspamd /etc/rspamd/custom/
chmod 0755 /etc/rspamd/custom/.
chown -R 82:82 /etc/rspamd/custom/*
chmod 644 -R /etc/rspamd/custom/*
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
# If a DQS key is set in mailcow.conf, add the Spamhaus DQS RBLs
if [[ ! -z ${SPAMHAUS_DQS_KEY} ]]; then
cat <<EOF > /etc/rspamd/custom/dqs-rbl.conf
# Autogenerated by mailcow. DO NOT TOUCH!
spamhaus {
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
from = false;
}
spamhaus_from {
from = true;
received = false;
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
returncodes {
SPAMHAUS_ZEN = [ "127.0.0.2", "127.0.0.3", "127.0.0.4", "127.0.0.5", "127.0.0.6", "127.0.0.7", "127.0.0.9", "127.0.0.10", "127.0.0.11" ];
}
}
spamhaus_authbl_received {
# Check if the sender client is listed in AuthBL (AuthBL is *not* part of ZEN)
rbl = "${SPAMHAUS_DQS_KEY}.authbl.dq.spamhaus.net";
from = false;
received = true;
ipv6 = true;
returncodes {
SH_AUTHBL_RECEIVED = "127.0.0.20"
}
}
spamhaus_dbl {
# Add checks on the HELO string
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
helo = true;
rdns = true;
dkim = true;
disable_monitoring = true;
returncodes {
RBL_DBL_SPAM = "127.0.1.2";
RBL_DBL_PHISH = "127.0.1.4";
RBL_DBL_MALWARE = "127.0.1.5";
RBL_DBL_BOTNET = "127.0.1.6";
RBL_DBL_ABUSED_SPAM = "127.0.1.102";
RBL_DBL_ABUSED_PHISH = "127.0.1.104";
RBL_DBL_ABUSED_MALWARE = "127.0.1.105";
RBL_DBL_ABUSED_BOTNET = "127.0.1.106";
RBL_DBL_DONT_QUERY_IPS = "127.0.1.255";
}
}
spamhaus_dbl_fullurls {
ignore_defaults = true;
no_ip = true;
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
selector = 'urls:get_host';
disable_monitoring = true;
returncodes {
DBLABUSED_SPAM_FULLURLS = "127.0.1.102";
DBLABUSED_PHISH_FULLURLS = "127.0.1.104";
DBLABUSED_MALWARE_FULLURLS = "127.0.1.105";
DBLABUSED_BOTNET_FULLURLS = "127.0.1.106";
}
}
spamhaus_zrd {
# Add checks on the HELO string also for DQS
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
helo = true;
rdns = true;
dkim = true;
disable_monitoring = true;
returncodes {
RBL_ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
RBL_ZRD_FRESH_DOMAIN = [
"127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"
];
RBL_ZRD_DONT_QUERY_IPS = "127.0.2.255";
}
}
"SPAMHAUS_ZEN_URIBL" {
enabled = true;
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
resolve_ip = true;
checks = ['urls'];
replyto = true;
emails = true;
ipv4 = true;
ipv6 = true;
emails_domainonly = true;
returncodes {
URIBL_SBL = "127.0.0.2";
URIBL_SBL_CSS = "127.0.0.3";
URIBL_XBL = ["127.0.0.4", "127.0.0.5", "127.0.0.6", "127.0.0.7"];
URIBL_PBL = ["127.0.0.10", "127.0.0.11"];
URIBL_DROP = "127.0.0.9";
}
}
SH_EMAIL_DBL {
ignore_defaults = true;
replyto = true;
emails_domainonly = true;
disable_monitoring = true;
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
returncodes = {
SH_EMAIL_DBL = [
"127.0.1.2",
"127.0.1.4",
"127.0.1.5",
"127.0.1.6"
];
SH_EMAIL_DBL_ABUSED = [
"127.0.1.102",
"127.0.1.104",
"127.0.1.105",
"127.0.1.106"
];
SH_EMAIL_DBL_DONT_QUERY_IPS = [ "127.0.1.255" ];
}
}
SH_EMAIL_ZRD {
ignore_defaults = true;
replyto = true;
emails_domainonly = true;
disable_monitoring = true;
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
returncodes = {
SH_EMAIL_ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
SH_EMAIL_ZRD_FRESH_DOMAIN = [
"127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"
];
SH_EMAIL_ZRD_DONT_QUERY_IPS = [ "127.0.2.255" ];
}
}
"DBL" {
# override the defaults for DBL defined in modules.d/rbl.conf
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
disable_monitoring = true;
}
"ZRD" {
ignore_defaults = true;
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
no_ip = true;
dkim = true;
emails = true;
emails_domainonly = true;
urls = true;
returncodes = {
ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
ZRD_FRESH_DOMAIN = ["127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"];
}
}
spamhaus_sbl_url {
ignore_defaults = true;
rbl = "${SPAMHAUS_DQS_KEY}.sbl.dq.spamhaus.net";
checks = ['urls'];
disable_monitoring = true;
returncodes {
SPAMHAUS_SBL_URL = "127.0.0.2";
}
}
SH_HBL_EMAIL {
ignore_defaults = true;
rbl = "_email.${SPAMHAUS_DQS_KEY}.hbl.dq.spamhaus.net";
emails_domainonly = false;
selector = "from('smtp').lower;from('mime').lower";
ignore_whitelist = true;
checks = ['emails', 'replyto'];
hash = "sha1";
returncodes = {
SH_HBL_EMAIL = [
"127.0.3.2"
];
}
}
spamhaus_dqs_hbl {
symbol = "HBL_FILE_UNKNOWN";
rbl = "_file.${SPAMHAUS_DQS_KEY}.hbl.dq.spamhaus.net.";
selector = "attachments('rbase32', 'sha256')";
ignore_whitelist = true;
ignore_defaults = true;
returncodes {
SH_HBL_FILE_MALICIOUS = "127.0.3.10";
SH_HBL_FILE_SUSPICIOUS = "127.0.3.15";
}
}
EOF
else
rm -rf /etc/rspamd/custom/dqs-rbl.conf
fi
exec "$@"

View file

@ -0,0 +1,443 @@
local fun = require "fun"
local rspamd_logger = require "rspamd_logger"
local util = require "rspamd_util"
local lua_util = require "lua_util"
local rspamd_regexp = require "rspamd_regexp"
local ucl = require "ucl"
local complicated = {}
local rules = {}
local scores = {}
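-- Converts SpamAssassin rule files (passed as arguments) into rspamd multimap
-- regexp maps: plain header/body/rawbody/full/uri rules are written to *_re.map
-- files plus auto_multimap.conf; anything the converter cannot express
-- (meta rules, if/endif blocks, negated header regexps, ...) is collected in
-- `complicated` and dumped verbatim to auto_sa.conf.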
local function words_to_re(words, start)
return table.concat(fun.totable(fun.drop_n(start, words)), " ");
end
local function split(str, delim)
local result = {}
if not delim then
delim = '[^%s]+'
end
for token in string.gmatch(str, delim) do
table.insert(result, token)
end
return result
end
local function handle_header_def(hline, cur_rule)
-- Now check for modifiers inside the header's name
local hdrs = split(hline, '[^|]+')
local hdr_params = {}
local cur_param = {}
-- Check if an re is an ordinary re
local ordinary = true
for _,h in ipairs(hdrs) do
if h == 'ALL' or h == 'ALL:raw' then
ordinary = false
else
local args = split(h, '[^:]+')
cur_param['strong'] = false
cur_param['raw'] = false
cur_param['header'] = args[1]
if args[2] then
-- We have some ops that are required for the header, so it's not ordinary
ordinary = false
end
fun.each(function(func)
if func == 'addr' then
cur_param['function'] = function(str)
local addr_parsed = util.parse_addr(str)
local ret = {}
if addr_parsed then
for _,elt in ipairs(addr_parsed) do
if elt['addr'] then
table.insert(ret, elt['addr'])
end
end
end
return ret
end
elseif func == 'name' then
cur_param['function'] = function(str)
local addr_parsed = util.parse_addr(str)
local ret = {}
if addr_parsed then
for _,elt in ipairs(addr_parsed) do
if elt['name'] then
table.insert(ret, elt['name'])
end
end
end
return ret
end
elseif func == 'raw' then
cur_param['raw'] = true
elseif func == 'case' then
cur_param['strong'] = true
else
rspamd_logger.warnx(rspamd_config, 'Function %1 is not supported in %2',
func, cur_rule['symbol'])
end
end, fun.tail(args))
-- Some header rules require splitting to check multiple headers
if cur_param['header'] == 'MESSAGEID' then
-- Special case for spamassassin
ordinary = false
elseif cur_param['header'] == 'ToCc' then
ordinary = false
else
table.insert(hdr_params, cur_param)
end
end
cur_rule['ordinary'] = ordinary and (not (#hdr_params > 1))
cur_rule['header'] = hdr_params
end
end
local function process_sa_conf(f)
local cur_rule = {}
local valid_rule = false
local function insert_cur_rule()
if not rules[cur_rule.type] then
rules[cur_rule.type] = {}
end
local target = rules[cur_rule.type]
if cur_rule.type == 'header' then
if not cur_rule.header[1].header then
rspamd_logger.errx(rspamd_config, 'bad rule definition: %1', cur_rule)
return
end
if not target[cur_rule.header[1].header] then
target[cur_rule.header[1].header] = {}
end
target = target[cur_rule.header[1].header]
end
if not cur_rule['symbol'] then
rspamd_logger.errx(rspamd_config, 'bad rule definition: %1', cur_rule)
return
end
target[cur_rule['symbol']] = cur_rule
cur_rule = {}
valid_rule = false
end
local function parse_score(words)
if #words == 3 then
-- score rule <x>
return tonumber(words[3])
elseif #words == 6 then
-- score rule <x1> <x2> <x3> <x4>
-- we assume here that bayes and network are enabled and select <x4>
return tonumber(words[6])
else
rspamd_logger.errx(rspamd_config, 'invalid score for %1', words[2])
end
return 0
end
local skip_to_endif = false
local if_nested = 0
for l in f:lines() do
(function ()
l = lua_util.rspamd_str_trim(l)
-- Replace bla=~/re/ with bla =~ /re/ (#2372)
l = l:gsub('([^%s])%s*([=!]~)%s*([^%s])', '%1 %2 %3')
if string.len(l) == 0 or string.sub(l, 1, 1) == '#' then
return
end
-- Unbalanced if/endif
if if_nested < 0 then if_nested = 0 end
if skip_to_endif then
if string.match(l, '^endif') then
if_nested = if_nested - 1
if if_nested == 0 then
skip_to_endif = false
end
elseif string.match(l, '^if') then
if_nested = if_nested + 1
elseif string.match(l, '^else') then
-- Else counterpart for if
skip_to_endif = false
end
table.insert(complicated, l)
return
else
if string.match(l, '^ifplugin') then
skip_to_endif = true
if_nested = if_nested + 1
table.insert(complicated, l)
elseif string.match(l, '^if !plugin%(') then
skip_to_endif = true
if_nested = if_nested + 1
table.insert(complicated, l)
elseif string.match(l, '^if') then
-- Unknown if
skip_to_endif = true
if_nested = if_nested + 1
table.insert(complicated, l)
elseif string.match(l, '^else') then
-- Else counterpart for if
skip_to_endif = true
table.insert(complicated, l)
elseif string.match(l, '^endif') then
if_nested = if_nested - 1
table.insert(complicated, l)
end
end
-- Skip comments
local words = fun.totable(fun.take_while(
function(w) return string.sub(w, 1, 1) ~= '#' end,
fun.filter(function(w)
return w ~= "" end,
fun.iter(split(l)))))
if words[1] == "header" then
-- header SYMBOL Header ~= /regexp/
if valid_rule then
insert_cur_rule()
end
if words[4] and (words[4] == '=~' or words[4] == '!~') then
cur_rule['type'] = 'header'
cur_rule['symbol'] = words[2]
if words[4] == '!~' then
table.insert(complicated, l)
return
end
cur_rule['re_expr'] = words_to_re(words, 4)
local unset_comp = string.find(cur_rule['re_expr'], '%s+%[if%-unset:')
if unset_comp then
table.insert(complicated, l)
return
end
cur_rule['re'] = rspamd_regexp.create(cur_rule['re_expr'])
if not cur_rule['re'] then
rspamd_logger.warnx(rspamd_config, "Cannot parse regexp '%1' for %2",
cur_rule['re_expr'], cur_rule['symbol'])
table.insert(complicated, l)
return
else
handle_header_def(words[3], cur_rule)
if not cur_rule['ordinary'] then
table.insert(complicated, l)
return
end
end
valid_rule = true
else
table.insert(complicated, l)
return
end
elseif words[1] == "body" then
-- body SYMBOL /regexp/
if valid_rule then
insert_cur_rule()
end
cur_rule['symbol'] = words[2]
if words[3] and (string.sub(words[3], 1, 1) == '/'
or string.sub(words[3], 1, 1) == 'm') then
cur_rule['type'] = 'sabody'
cur_rule['re_expr'] = words_to_re(words, 2)
cur_rule['re'] = rspamd_regexp.create(cur_rule['re_expr'])
if cur_rule['re'] then
valid_rule = true
end
else
-- might be function
table.insert(complicated, l)
return
end
elseif words[1] == "rawbody" then
-- body SYMBOL /regexp/
if valid_rule then
insert_cur_rule()
end
cur_rule['symbol'] = words[2]
if words[3] and (string.sub(words[3], 1, 1) == '/'
or string.sub(words[3], 1, 1) == 'm') then
cur_rule['type'] = 'sarawbody'
cur_rule['re_expr'] = words_to_re(words, 2)
cur_rule['re'] = rspamd_regexp.create(cur_rule['re_expr'])
if cur_rule['re'] then
valid_rule = true
end
else
table.insert(complicated, l)
return
end
elseif words[1] == "full" then
-- body SYMBOL /regexp/
if valid_rule then
insert_cur_rule()
end
cur_rule['symbol'] = words[2]
if words[3] and (string.sub(words[3], 1, 1) == '/'
or string.sub(words[3], 1, 1) == 'm') then
cur_rule['type'] = 'message'
cur_rule['re_expr'] = words_to_re(words, 2)
cur_rule['re'] = rspamd_regexp.create(cur_rule['re_expr'])
cur_rule['raw'] = true
if cur_rule['re'] then
valid_rule = true
end
else
table.insert(complicated, l)
return
end
elseif words[1] == "uri" then
-- uri SYMBOL /regexp/
if valid_rule then
insert_cur_rule()
end
cur_rule['type'] = 'uri'
cur_rule['symbol'] = words[2]
cur_rule['re_expr'] = words_to_re(words, 2)
cur_rule['re'] = rspamd_regexp.create(cur_rule['re_expr'])
if cur_rule['re'] and cur_rule['symbol'] then
valid_rule = true
else
table.insert(complicated, l)
return
end
elseif words[1] == "meta" then
-- meta SYMBOL expression
if valid_rule then
insert_cur_rule()
end
table.insert(complicated, l)
return
elseif words[1] == "describe" and valid_rule then
cur_rule['description'] = words_to_re(words, 2)
elseif words[1] == "score" then
scores[words[2]] = parse_score(words)
else
table.insert(complicated, l)
return
end
end)()
end
if valid_rule then
insert_cur_rule()
end
end
for _,matched in ipairs(arg) do
local f = io.open(matched, "r")
if f then
rspamd_logger.messagex(rspamd_config, 'loading SA rules from %s', matched)
process_sa_conf(f)
else
rspamd_logger.errx(rspamd_config, "cannot open %1", matched)
end
end
local multimap_conf = {}
local function handle_rule(what, syms, hdr)
local mtype
local filter
local fname
local header
local sym = what:upper()
if what == 'sabody' then
mtype = 'content'
fname = 'body_re.map'
filter = 'oneline'
elseif what == 'sarawbody' then
fname = 'raw_body_re.map'
mtype = 'content'
filter = 'rawtext'
elseif what == 'full' then
fname = 'full_re.map'
mtype = 'content'
filter = 'full'
elseif what == 'uri' then
fname = 'uri_re.map'
mtype = 'url'
filter = 'full'
elseif what == 'header' then
fname = ('hdr_' .. hdr .. '_re.map'):lower()
mtype = 'header'
header = hdr
sym = sym .. '_' .. hdr:upper()
else
rspamd_logger.errx('unknown type: %s', what)
return
end
local conf = {
type = mtype,
filter = filter,
symbol = 'SA_MAP_AUTO_' .. sym,
regexp = true,
map = fname,
header = header,
symbols = {}
}
local re_file = io.open(fname, 'w')
for k,r in pairs(syms) do
local score = 0.0
if scores[k] then
score = scores[k]
end
re_file:write(string.format('/%s/ %s:%f\n', tostring(r.re), k, score))
table.insert(conf.symbols, k)
end
re_file:close()
multimap_conf[sym:lower()] = conf
rspamd_logger.messagex('stored %s regexp in %s', sym:lower(), fname)
end
for k,v in pairs(rules) do
if k == 'header' then
for h,r in pairs(v) do
handle_rule(k, r, h)
end
else
handle_rule(k, v)
end
end
local out = ucl.to_format(multimap_conf, 'ucl')
local mmap_conf = io.open('auto_multimap.conf', 'w')
mmap_conf:write(out)
mmap_conf:close()
rspamd_logger.messagex('stored multimap conf in %s', 'auto_multimap.conf')
local sa_remain = io.open('auto_sa.conf', 'w')
fun.each(function(l)
sa_remain:write(l)
sa_remain:write('\n')
end, fun.filter(function(l) return not string.match(l, '^%s+$') end, complicated))
sa_remain:close()
rspamd_logger.messagex('stored sa remains conf in %s', 'auto_sa.conf')

View file

@ -0,0 +1,12 @@
#!/bin/bash
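# Usage (as called by mailcow, assumed): set_worker_password.sh <cleartext password>
# Hashes the password with rspamadm and writes the controller worker override.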
password_file='/etc/rspamd/override.d/worker-controller-password.inc'
password_hash=$(/usr/bin/rspamadm pw -e -p "$1")
echo "enable_password = \"${password_hash}\";" > "${password_file}"
if grep -q "$password_hash" "$password_file"; then
echo "OK"
else
echo "ERROR"
fi

View file

@ -0,0 +1 @@
settings = "http://nginx:8081/settings.php";

View file

@ -0,0 +1,58 @@
FROM debian:bookworm-slim
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBIAN_VERSION=bookworm
ARG SOGO_DEBIAN_REPOSITORY=http://www.axis.cz/linux/debian
# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced extractVersion=^(?<version>.*)$
ARG GOSU_VERSION=1.17
ENV LC_ALL=C
# Prerequisites
RUN echo "Building from repository $SOGO_DEBIAN_REPOSITORY" \
&& apt-get update && apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
gettext \
gnupg \
mariadb-client \
rsync \
supervisor \
syslog-ng \
syslog-ng-core \
syslog-ng-mod-redis \
dirmngr \
netcat-traditional \
psmisc \
wget \
patch \
&& dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& mkdir /usr/share/doc/sogo \
&& touch /usr/share/doc/sogo/empty.sh \
&& apt-key adv --keyserver keys.openpgp.org --recv-key 74FFC6D72B925A34B5D356BDF8A27B36A6E2EAE9 \
&& echo "deb [trusted=yes] ${SOGO_DEBIAN_REPOSITORY} ${DEBIAN_VERSION} sogo-v5" > /etc/apt/sources.list.d/sogo.list \
&& apt-get update && apt-get install -y --no-install-recommends \
sogo \
sogo-activesync \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/sogo.list \
&& touch /etc/default/locale
COPY ./bootstrap-sogo.sh /bootstrap-sogo.sh
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY acl.diff /acl.diff
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY docker-entrypoint.sh /
RUN chmod +x /bootstrap-sogo.sh \
/usr/local/sbin/stop-supervisor.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

11
data/Dockerfiles/sogo/acl.diff Executable file
View file

@ -0,0 +1,11 @@
--- /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox 2018-08-17 18:29:57.987504204 +0200
+++ /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox 2018-08-17 18:29:35.918291298 +0200
@@ -46,7 +46,7 @@
</md-item-template>
</md-autocomplete>
</div>
- <md-card ng-repeat="user in acl.users | orderBy:['userClass', 'cn']"
+ <md-card ng-repeat="user in acl.users | filter:{ userClass: 'normal' } | orderBy:['cn']"
class="sg-collapsed"
ng-class="{ 'sg-expanded': user.uid == acl.selectedUid }">
<a class="md-flex md-button" ng-click="acl.selectUser(user, $event)">

View file

@ -0,0 +1,253 @@
#!/bin/bash
# Wait for MySQL to warm up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
# Kill any running sogod until port 20000 becomes free
until ! nc -z sogo-mailcow 20000;
do
killall -TERM sogod
sleep 3
done
# Wait for updated schema
DBV_NOW=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'db_schema';" -BN)
DBV_NEW=$(grep -oE '\$db_version = .*;' init_db.inc.php | sed 's/$db_version = //g;s/;//g' | cut -d \" -f2)
while [[ "${DBV_NOW}" != "${DBV_NEW}" ]]; do
echo "Waiting for schema update..."
DBV_NOW=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'db_schema';" -BN)
DBV_NEW=$(grep -oE '\$db_version = .*;' init_db.inc.php | sed 's/$db_version = //g;s/;//g' | cut -d \" -f2)
sleep 5
done
echo "DB schema is ${DBV_NOW}"
# Recreate view
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing sogo_view..."
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DROP VIEW IF EXISTS sogo_view"
while [[ ${VIEW_OK} != 'OK' ]]; do
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
CREATE VIEW sogo_view (c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings) AS
SELECT
mailbox.username,
mailbox.domain,
mailbox.username,
IF(JSON_UNQUOTE(JSON_VALUE(attributes, '$.force_pw_update')) = '0', IF(JSON_UNQUOTE(JSON_VALUE(attributes, '$.sogo_access')) = 1, password, '{SSHA256}A123A123A321A321A321B321B321B123B123B321B432F123E321123123321321'), '{SSHA256}A123A123A321A321A321B321B321B123B123B321B432F123E321123123321321'),
mailbox.name,
mailbox.username,
IFNULL(GROUP_CONCAT(ga.aliases ORDER BY ga.aliases SEPARATOR ' '), ''),
IFNULL(gda.ad_alias, ''),
IFNULL(external_acl.send_as_acl, ''),
mailbox.kind,
mailbox.multiple_bookings
FROM
mailbox
LEFT OUTER JOIN
grouped_mail_aliases ga
ON ga.username REGEXP CONCAT('(^|,)', mailbox.username, '($|,)')
LEFT OUTER JOIN
grouped_domain_alias_address gda
ON gda.username = mailbox.username
LEFT OUTER JOIN
grouped_sender_acl_external external_acl
ON external_acl.username = mailbox.username
WHERE
mailbox.active = '1'
GROUP BY
mailbox.username;
EOF
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'sogo_view'") ]]; then
VIEW_OK=OK
else
echo "Will retry to setup SOGo view in 3s..."
sleep 3
fi
done
else
while [[ ${VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'sogo_view'") ]]; then
VIEW_OK=OK
else
echo "Waiting for SOGo view to be created by master..."
sleep 3
fi
done
fi
# Wait for the static view table (it may be missing right after an update), then refresh its content
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing _sogo_static_view..."
while [[ ${STATIC_VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '_sogo_static_view'") ]]; then
STATIC_VIEW_OK=OK
echo "Updating _sogo_static_view content..."
# If changed, also update init_db.inc.php
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "REPLACE INTO _sogo_static_view (c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings) SELECT c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings from sogo_view;"
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "DELETE FROM _sogo_static_view WHERE c_uid NOT IN (SELECT username FROM mailbox WHERE active = '1')"
else
echo "Waiting for database initialization..."
sleep 3
fi
done
else
while [[ ${STATIC_VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '_sogo_static_view'") ]]; then
STATIC_VIEW_OK=OK
else
echo "Waiting for database initialization by master..."
sleep 3
fi
done
fi
# Recreate password update trigger
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing update trigger..."
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DROP TRIGGER IF EXISTS sogo_update_password"
while [[ ${TRIGGER_OK} != 'OK' ]]; do
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELIMITER -
CREATE TRIGGER sogo_update_password AFTER UPDATE ON _sogo_static_view
FOR EACH ROW
BEGIN
UPDATE mailbox SET password = NEW.c_password WHERE NEW.c_uid = username;
END;
-
DELIMITER ;
EOF
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_NAME = 'sogo_update_password'") ]]; then
TRIGGER_OK=OK
else
echo "Will retry to setup SOGo password update trigger in 3s"
sleep 3
fi
done
fi
# cat /dev/urandom occasionally hangs here and is not recommended anyway; use openssl instead
RAND_PASS=$(openssl rand -base64 16 | tr -dc _A-Z-a-z-0-9)
# Generate plist header with timezone data
mkdir -p /var/lib/sogo/GNUstep/Defaults/
cat <<EOF > /var/lib/sogo/GNUstep/Defaults/sogod.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//GNUstep//DTD plist 0.9//EN" "http://www.gnustep.org/plist-0_9.xml">
<plist version="0.9">
<dict>
<key>OCSAclURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_acl</string>
<key>SOGoIMAPServer</key>
<string>imap://${IPV4_NETWORK}.250:143/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoSieveServer</key>
<string>sieve://${IPV4_NETWORK}.250:4190/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoSMTPServer</key>
<string>smtp://${IPV4_NETWORK}.253:588/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoTrustProxyAuthentication</key>
<string>YES</string>
<key>SOGoEncryptionKey</key>
<string>${RAND_PASS}</string>
<key>OCSAdminURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_admin</string>
<key>OCSCacheFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_cache_folder</string>
<key>OCSEMailAlarmsFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_alarms_folder</string>
<key>OCSFolderInfoURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_folder_info</string>
<key>OCSSessionsFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_sessions_folder</string>
<key>OCSStoreURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_store</string>
<key>SOGoProfileURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_user_profile</string>
<key>SOGoTimeZone</key>
<string>${TZ}</string>
<key>domains</key>
<dict>
EOF
# Generate multi-domain setup
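# One '<domain> <YES|NO>' pair per line is read from MySQL via the process
# substitution after 'done'; each domain gets its own SOGoUserSources dict.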
while read -r line gal
do
echo " <key>${line}</key>
<dict>
<key>SOGoMailDomain</key>
<string>${line}</string>
<key>SOGoUserSources</key>
<array>
<dict>
<key>MailFieldNames</key>
<array>
<string>aliases</string>
<string>ad_aliases</string>
<string>ext_acl</string>
</array>
<key>KindFieldName</key>
<string>kind</string>
<key>DomainFieldName</key>
<string>domain</string>
<key>MultipleBookingsFieldName</key>
<string>multiple_bookings</string>
<key>listRequiresDot</key>
<string>NO</string>
<key>canAuthenticate</key>
<string>YES</string>
<key>displayName</key>
<string>GAL ${line}</string>
<key>id</key>
<string>${line}</string>
<key>isAddressBook</key>
<string>${gal}</string>
<key>type</key>
<string>sql</string>
<key>userPasswordAlgorithm</key>
<string>${MAILCOW_PASS_SCHEME}</string>
<key>prependPasswordScheme</key>
<string>YES</string>
<key>viewURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/_sogo_static_view</string>
</dict>" >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Generate an alternative LDAP authentication dict that is tried when SQL authentication fails
# Attributes are nevertheless read from LDAP
line=${line} envsubst < /etc/sogo/plist_ldap >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
echo " </array>
</dict>" >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain, CASE gal WHEN '1' THEN 'YES' ELSE 'NO' END AS gal FROM domain;" -B -N)
# Generate footer
echo ' </dict>
</dict>
</plist>' >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Fix permissions
chown sogo:sogo -R /var/lib/sogo/
chmod 600 /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Patch ACLs
#if [[ ${ACL_ANYONE} == 'allow' ]]; then
# #enable any or authenticated targets for ACL
# if patch -R -sfN --dry-run /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff > /dev/null; then
# patch -R /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff;
# fi
#else
# #disable any or authenticated targets for ACL
# if patch -sfN --dry-run /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff > /dev/null; then
# patch /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff;
# fi
#fi
# Copy logo, if any
[[ -f /etc/sogo/sogo-full.svg ]] && cp /etc/sogo/sogo-full.svg /usr/lib/GNUstep/SOGo/WebServerResources/img/sogo-full.svg
# Rsync web content
echo "Syncing web content with named volume"
rsync -a /usr/lib/GNUstep/SOGo/. /sogo_web/
# Chown backup path
chown -R sogo:sogo /sogo_backup
exec gosu sogo /usr/sbin/sogod

View file

@ -0,0 +1,21 @@
#!/bin/bash
if [[ "${SKIP_SOGO}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_SOGO=y, skipping SOGo..."
sleep 365d
exit 0
fi
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
fi
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
exec "$@"

View file

@ -0,0 +1,8 @@
#!/bin/bash
printf "READY\n";
while read line; do
echo "Processing Event: $line" >&2;
kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin

View file

@ -0,0 +1,27 @@
[supervisord]
nodaemon=true
user=root
[program:syslog-ng]
command=/usr/sbin/syslog-ng --foreground --no-caps
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
priority=1
[program:bootstrap-sogo]
command=/bootstrap-sogo.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
priority=2
startretries=10
autorestart=true
stopwaitsecs=120
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh
events=PROCESS_STATE_STOPPED, PROCESS_STATE_EXITED, PROCESS_STATE_FATAL

View file

@ -0,0 +1,45 @@
@version: 3.38
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
};
source s_src {
unix-stream("/dev/log");
internal();
};
source s_sogo {
pipe("/dev/sogo_log" owner(sogo) group(sogo));
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
command("LPUSH" "SOGO_LOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
log {
source(s_sogo);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};
log {
source(s_sogo);
source(s_src);
destination(d_stdout);
};

View file

@ -0,0 +1,45 @@
@version: 3.38
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats_freq(0);
bad_hostname("^gconfd$");
};
source s_src {
unix-stream("/dev/log");
internal();
};
source s_sogo {
pipe("/dev/sogo_log" owner(sogo) group(sogo));
};
destination d_stdout { pipe("/dev/stdout"); };
destination d_redis_ui_log {
redis(
host("redis-mailcow")
persist-name("redis1")
port(6379)
command("LPUSH" "SOGO_LOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
destination d_redis_f2b_channel {
redis(
host("redis-mailcow")
persist-name("redis2")
port(6379)
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
log {
source(s_sogo);
destination(d_redis_ui_log);
destination(d_redis_f2b_channel);
};
log {
source(s_sogo);
source(s_src);
destination(d_stdout);
};

View file

@ -0,0 +1,31 @@
FROM solr:7.7-slim
USER root
# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced extractVersion=(?<version>.*)$
ARG GOSU_VERSION=1.17
COPY solr.sh /
COPY solr-config-7.7.0.xml /
COPY solr-schema-7.7.0.xml /
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& apt-get update && apt-get install -y --no-install-recommends \
tzdata \
curl \
bash \
zip \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* \
&& chmod +x /solr.sh \
&& sync \
&& bash /solr.sh --bootstrap
RUN zip -q -d /opt/solr/server/lib/ext/log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
RUN apt remove zip -y
CMD ["/solr.sh"]

View file

@ -0,0 +1,289 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!-- This is the default config with stuff non-essential to Dovecot removed. -->
<config>
<!-- Controls what version of Lucene various components of Solr
adhere to. Generally, you want to use the latest version to
get all bug fixes and improvements. It is highly recommended
that you fully re-index after changing this setting as it can
affect both how text is indexed and queried.
-->
<luceneMatchVersion>7.7.0</luceneMatchVersion>
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
When a 'regex' is specified in addition to a 'dir', only the
files in that directory which completely match the regex
(anchored on both ends) will be included.
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/clustering/lib/" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-clustering-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- Data Directory
Used to specify an alternate directory to hold all index data
other than the default ./data under the Solr home. If
replication is in use, this should match the replication
configuration.
-->
<dataDir>${solr.data.dir:}</dataDir>
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<!-- Enables a transaction log, used for real-time get, durability, and
and solr cloud replica recovery. The log can grow as big as
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory.
"numVersionBuckets" - sets the number of buckets used to keep
track of max version values when checking for re-ordered
updates; increase this value to reduce the cost of
synchronizing access to version buckets during high-volume
indexing, this requires 8 bytes (long) * numVersionBuckets
of heap space per Solr core.
-->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
<int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
maxDocs - Maximum number of documents to add since the last
commit before automatically triggering a new commit.
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<!-- softAutoCommit is like autoCommit except it causes a
'soft' commit which only ensures that changes are visible
but does not ensure that data is synced to disk. This is
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
postCommit - fired after every commit or optimize command
postOptimize - fired after every optimize command
-->
</updateHandler>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Query section - these settings control query time things like caches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<query>
<!-- Solr Internal Query Caches
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
when the hit ratio of the cache is high (> 75%), and may be
faster under other scenarios on multi-cpu systems.
-->
<!-- Filter Cache
Cache used by SolrIndexSearcher for filters (DocSets),
unordered sets of *all* documents that match a query. When a
new searcher is opened, its caches may be prepopulated or
"autowarmed" using data from caches in the old searcher.
autowarmCount is the number of items to prepopulate. For
LRUCache, the autowarmed items will be the most recently
accessed items.
Parameters:
class - the SolrCache implementation to use
(LRUCache or FastLRUCache)
size - the maximum number of entries in the cache
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
to occupy. Note that when this option is specified, the size
and initialSize parameters are ignored.
-->
<filterCache class="solr.FastLRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
Additional supported parameter by LRUCache:
maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
to occupy
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
initialSize="0"
autowarmCount="10"
regenerator="solr.NoOpRegenerator" />
<!-- Lazy Field Loading
If true, stored fields that are not requested will be loaded
lazily. This can result in a significant speed improvement
if the usual case is to not load all stored fields,
especially if the skipped fields are large compressed text
fields.
-->
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
is collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryResultWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Use Cold Searcher
If a search request comes in and there is no current
registered searcher, then immediately register the still
warming searcher and use it. If "false" then all requests
will block until the first searcher is done warming.
-->
<useColdSearcher>false</useColdSearcher>
</query>
<!-- Request Dispatcher
This section contains instructions for how the SolrDispatchFilter
should behave when processing requests for this SolrCore.
-->
<requestDispatcher>
<httpCaching never304="true" />
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
Incoming queries will be dispatched to a specific handler by name
based on the path specified in the request.
If a Request Handler is declared with startup="lazy", then it will
not be initialized until the first request that uses it.
-->
<!-- SearchHandler
http://wiki.apache.org/solr/SearchHandler
For processing Search Queries, the primary Request Handler
provided with Solr is "SearchHandler". It delegates to a sequence
of SearchComponents (see below) and supports distributed
queries across multiple shards.
-->
<requestHandler name="/select" class="solr.SearchHandler">
<!-- default values for query parameters can be specified; these
will be overridden by parameters in the request
-->
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
</lst>
</requestHandler>
<initParams path="/update/**,/select">
<lst name="defaults">
<str name="df">_text_</str>
</lst>
</initParams>
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
Request responses will be written using the writer specified by
the 'wt' request parameter matching the name of a registered
writer.
The writer registered as "default" will be used if 'wt' is
not specified in the request.
-->
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
</config>
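To illustrate the commit settings above, a hedged sketch (assumes Solr listens on localhost:8983 and the dovecot-fts core created by the bootstrap script later in this commit exists; the field values are invented to satisfy the schema):
# Index a document; commitWithin defers visibility to the soft/hard
# commit machinery configured above (at most ~15s here).
curl 'http://localhost:8983/solr/dovecot-fts/update?commitWithin=15000' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"example/1","uid":1,"box":"INBOX","user":"user@example.com"}]'
# Force an immediate hard commit instead of waiting for autoCommit:
curl 'http://localhost:8983/solr/dovecot-fts/update' \
  -H 'Content-Type: application/json' \
  -d '{"commit":{}}'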

@ -0,0 +1,49 @@
<?xml version="1.0" encoding="UTF-8"?>
<schema name="dovecot-fts" version="2.0">
<fieldType name="string" class="solr.StrField" omitNorms="true" sortMissingLast="true"/>
<fieldType name="long" class="solr.LongPointField" positionIncrementGap="0"/>
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
<fieldType name="text" class="solr.TextField" autoGeneratePhraseQueries="true" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="20"/>
<filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
<filter class="solr.WordDelimiterGraphFilterFactory" catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1" generateWordParts="1" splitOnNumerics="1" catenateAll="1" catenateWords="1"/>
<filter class="solr.FlattenGraphFilterFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymGraphFilterFactory" expand="true" ignoreCase="true" synonyms="synonyms.txt"/>
<filter class="solr.FlattenGraphFilterFactory"/>
<filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
<filter class="solr.WordDelimiterGraphFilterFactory" catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1" generateWordParts="1" splitOnNumerics="1" catenateAll="1" catenateWords="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
</fieldType>
<field name="id" type="string" indexed="true" required="true" stored="true"/>
<field name="uid" type="long" indexed="true" required="true" stored="true"/>
<field name="box" type="string" indexed="true" required="true" stored="true"/>
<field name="user" type="string" indexed="true" required="true" stored="true"/>
<field name="hdr" type="text" indexed="true" stored="false"/>
<field name="body" type="text" indexed="true" stored="false"/>
<field name="from" type="text" indexed="true" stored="false"/>
<field name="to" type="text" indexed="true" stored="false"/>
<field name="cc" type="text" indexed="true" stored="false"/>
<field name="bcc" type="text" indexed="true" stored="false"/>
<field name="subject" type="text" indexed="true" stored="false"/>
<!-- Used by Solr internally: -->
<field name="_version_" type="long" indexed="true" stored="true"/>
<uniqueKey>id</uniqueKey>
</schema>
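A hedged query sketch against the fields defined above (hostname and search terms are illustrative; Dovecot's Solr FTS plugin issues equivalent queries itself):
# Search one user's message bodies, returning mailbox and IMAP UID:
curl -G 'http://localhost:8983/solr/dovecot-fts/select' \
  --data-urlencode 'q=body:invoice AND user:"user@example.com"' \
  --data-urlencode 'fl=uid,box,score' \
  --data-urlencode 'rows=10'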

75
data/Dockerfiles/solr/solr.sh Executable file
@ -0,0 +1,75 @@
#!/bin/bash
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "FLATCURVE_EXPERIMENTAL=y, skipping Solr but enabling Flatcurve as FTS for Dovecot!"
echo "Solr will be removed in the future!"
sleep 365d
exit 0
elif [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_SOLR=y, skipping Solr..."
echo "HINT: You could try the newer FTS backend Flatcurve, which is currently experimental..."
echo "Simply set FLATCURVE_EXPERIMENTAL=y inside your mailcow.conf and restart the stack afterwards!"
echo "Solr will be removed in the future!"
sleep 365d
exit 0
fi
MEM_TOTAL=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [[ "${1}" != "--bootstrap" ]]; then
if [ ${MEM_TOTAL} -lt "2097152" ]; then
echo "System memory less than 2 GB, skipping Solr..."
sleep 365d
exit 0
fi
fi
set -e
# run the optional initdb
. /opt/docker-solr/scripts/run-initdb
# fixing volume permission
[[ -d /opt/solr/server/solr/dovecot-fts/data ]] && chown -R solr:solr /opt/solr/server/solr/dovecot-fts/data
if [[ "${1}" != "--bootstrap" ]]; then
sed -i '/SOLR_HEAP=/c\SOLR_HEAP="'${SOLR_HEAP:-1024}'m"' /opt/solr/bin/solr.in.sh
else
sed -i '/SOLR_HEAP=/c\SOLR_HEAP="256m"' /opt/solr/bin/solr.in.sh
fi
if [[ "${1}" == "--bootstrap" ]]; then
echo "Creating initial configuration"
echo "Modifying default config set"
cp /solr-config-7.7.0.xml /opt/solr/server/solr/configsets/_default/conf/solrconfig.xml
cp /solr-schema-7.7.0.xml /opt/solr/server/solr/configsets/_default/conf/schema.xml
rm /opt/solr/server/solr/configsets/_default/conf/managed-schema
echo "Starting local Solr instance to set up configuration"
gosu solr start-local-solr
echo "Creating core \"dovecot-fts\""
gosu solr /opt/solr/bin/solr create -c "dovecot-fts"
# See https://github.com/docker-solr/docker-solr/issues/27
echo "Checking core"
while ! wget -O - 'http://localhost:8983/solr/admin/cores?action=STATUS' | grep -q instanceDir; do
echo "Could not find any cores, waiting..."
sleep 3
done
echo "Created core \"dovecot-fts\""
echo "Stopping local Solr"
gosu solr stop-local-solr
exit 0
fi
echo "Starting up Solr..."
echo -e "\e[31mSolr is deprecated! You can try the new FTS System now by enabling FLATCURVE_EXPERIMENTAL=y inside mailcow.conf and restarting the stack\e[0m"
echo -e "\e[31mSolr will be removed completely soon!\e[0m"
sleep 15
exec gosu solr solr-foreground
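For reference, a hedged sketch of how this script's two modes could be invoked; both the in-container path /solr.sh and the compose service name solr-mailcow are assumptions, not confirmed by this diff:
docker compose run --rm solr-mailcow /solr.sh --bootstrap   # one-off: create the "dovecot-fts" core
docker compose up -d solr-mailcow                           # normal start, SOLR_HEAP applied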

@ -0,0 +1,36 @@
FROM alpine:3.20
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
RUN apk add --update --no-cache \
curl \
bind-tools \
coreutils \
unbound \
bash \
openssl \
drill \
tzdata \
syslog-ng \
supervisor \
&& curl -o /etc/unbound/root.hints https://www.internic.net/domain/named.cache \
&& chown root:unbound /etc/unbound \
&& adduser unbound tty \
&& chmod 775 /etc/unbound
EXPOSE 53/udp 53/tcp
COPY docker-entrypoint.sh /docker-entrypoint.sh
# healthcheck (dig, ping)
COPY healthcheck.sh /healthcheck.sh
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
RUN chmod +x /healthcheck.sh
HEALTHCHECK --interval=30s --timeout=10s \
CMD sh -c '[ -f /tmp/healthcheck_status ] && [ "$(cat /tmp/healthcheck_status)" -eq 0 ] || exit 1'
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

@ -0,0 +1,20 @@
#!/bin/bash
echo "Setting console permissions..."
chown root:tty /dev/console
chmod g+rw /dev/console
echo "Receiving anchor key..."
/usr/sbin/unbound-anchor -a /etc/unbound/trusted-key.key
echo "Receiving root hints..."
curl -#o /etc/unbound/root.hints https://www.internic.net/domain/named.cache
/usr/sbin/unbound-control-setup
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
exec "$@"
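The loop above runs every executable found under /hooks before handing over to the main command, so site-specific steps can be added without rebuilding the image. A minimal, hedged example hook (the file name is arbitrary):
#!/bin/bash
# Placed at /hooks/10-example.sh and marked executable; runs before unbound starts.
echo "Example hook: extra unbound setup could happen here"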

@ -0,0 +1,102 @@
#!/bin/bash
STATUS_FILE="/tmp/healthcheck_status"
RUNS=0
# Declare log function for logfile to stdout
function log_to_stdout() {
echo "$(date +"%Y-%m-%d %H:%M:%S"): $1"
}
# General Ping function to check general pingability
function check_ping() {
declare -a ipstoping=("1.1.1.1" "8.8.8.8" "9.9.9.9")
local fail_tolerance=1
local failures=0
for ip in "${ipstoping[@]}" ; do
success=false
for ((i=1; i<=3; i++)); do
ping -q -c 3 -w 5 "$ip" > /dev/null
if [ $? -eq 0 ]; then
success=true
break
else
log_to_stdout "Healthcheck: Failed to ping $ip on attempt $i. Trying again..."
fi
done
if [ "$success" = false ]; then
log_to_stdout "Healthcheck: Couldn't ping $ip after 3 attempts. Marking this IP as failed."
((failures++))
fi
done
if [ $failures -gt $fail_tolerance ]; then
log_to_stdout "Healthcheck: Too many ping failures ($fail_tolerance failures allowed, you got $failures failures), marking Healthcheck as unhealthy..."
return 1
fi
return 0
}
# General DNS resolve check against the Unbound resolver itself
function check_dns() {
declare -a domains=("fuzzy.mailcow.email" "github.com" "hub.docker.com")
local fail_tolerance=1
local failures=0
for domain in "${domains[@]}" ; do
success=false
for ((i=1; i<=3; i++)); do
dig_output=$(dig +short +timeout=2 +tries=1 "$domain" @127.0.0.1 2>/dev/null)
dig_rc=$?
if [ $dig_rc -ne 0 ] || [ -z "$dig_output" ]; then
log_to_stdout "Healthcheck: DNS Resolution Failed on attempt $i for $domain! Trying again..."
else
success=true
break
fi
done
if [ "$success" = false ]; then
log_to_stdout "Healthcheck: DNS Resolution not possible after 3 attempts for $domain... Gave up!"
((failures++))
fi
done
if [ $failures -gt $fail_tolerance ]; then
log_to_stdout "Healthcheck: Too many DNS failures ($fail_tolerance failures allowed, you got $failures failures), marking Healthcheck as unhealthy..."
return 1
fi
return 0
}
while true; do
if [[ ${SKIP_UNBOUND_HEALTHCHECK} == "y" ]]; then
log_to_stdout "Healthcheck: ALL CHECKS WERE SKIPPED! Unbound is healthy!"
echo "0" > $STATUS_FILE
sleep 365d
fi
# Run checks; if a check does not return 0 (the value for a passing check), write 1 to the status file so Docker marks the container unhealthy
check_ping
PING_STATUS=$?
check_dns
DNS_STATUS=$?
if [ $PING_STATUS -ne 0 ] || [ $DNS_STATUS -ne 0 ]; then
echo "1" > $STATUS_FILE
else
echo "0" > $STATUS_FILE
fi
sleep 30
done
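To debug an unhealthy state, the same probes can be run by hand inside the container (a sketch; dig and ping ship with the image built above):
dig +short +timeout=2 +tries=1 github.com @127.0.0.1   # the DNS check against local unbound
ping -q -c 3 -w 5 1.1.1.1                              # the reachability check
cat /tmp/healthcheck_status                            # 0 = healthy, 1 = unhealthy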

@ -0,0 +1,10 @@
#!/bin/bash
printf "READY\n";
while read line; do
echo "Processing Event: $line" >&2;
# A monitored process hit STOPPED/EXITED/FATAL (see the eventlistener in
# supervisord.conf): send SIGQUIT to supervisord so the whole container stops.
kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin
rm -rf /tmp/healthcheck_status

@ -0,0 +1,32 @@
[supervisord]
nodaemon=true
user=root
pidfile=/var/run/supervisord.pid
[program:syslog-ng]
command=/usr/sbin/syslog-ng --foreground --no-caps
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
[program:unbound]
command=/usr/sbin/unbound
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
[program:unbound-healthcheck]
command=/bin/bash /healthcheck.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh
events=PROCESS_STATE_STOPPED, PROCESS_STATE_EXITED, PROCESS_STATE_FATAL

View file

@ -0,0 +1,21 @@
@version: 4.5
@include "scl.conf"
options {
chain_hostnames(off);
flush_lines(0);
use_dns(no);
use_fqdn(no);
owner("root"); group("adm"); perm(0640);
stats(freq(0));
keep_timestamp(no);
bad_hostname("^gconfd$");
};
source s_dgram {
unix-dgram("/dev/log");
internal();
};
destination d_stdout { pipe("/dev/stdout"); };
log {
source(s_dgram);
destination(d_stdout);
};

@ -0,0 +1,40 @@
FROM alpine:3.20
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
# Installation
RUN apk add --update --no-cache \
nagios-plugins-smtp \
nagios-plugins-tcp \
nagios-plugins-http \
nagios-plugins-ping \
mariadb-client \
curl \
bash \
coreutils \
jq \
fcgi \
openssl \
nagios-plugins-mysql \
nagios-plugins-dns \
nagios-plugins-disk \
bind-tools \
redis \
perl \
perl-net-dns \
perl-io-socket-ssl \
perl-io-socket-inet6 \
perl-socket \
perl-socket6 \
perl-mime-lite \
perl-term-readkey \
tini \
tzdata \
whois \
&& curl https://raw.githubusercontent.com/mludvig/smtp-cli/v3.10/smtp-cli -o /smtp-cli \
&& chmod +x /smtp-cli
COPY watchdog.sh /watchdog.sh
COPY check_mysql_slavestatus.sh /usr/lib/nagios/plugins/check_mysql_slavestatus.sh
CMD ["/watchdog.sh"]

@ -0,0 +1,223 @@
#!/bin/bash
#########################################################################
# Script: check_mysql_slavestatus.sh #
# Author: Claudio Kuenzler www.claudiokuenzler.com #
# Purpose: Monitor MySQL Replication status with Nagios #
# Description: Connects to given MySQL hosts and checks for running #
# SLAVE state and delivers additional info #
# Original: This script is a modified version of #
# check mysql slave sql running written by dhirajt #
# Thanks to: Victor Balada Diaz for his ideas added on 20080930 #
# Soren Klintrup for stuff added on 20081015 #
# Marc Feret for Slave_IO_Running check 20111227 #
# Peter Lecki for his mods added on 20120803 #
# Serge Victor for his mods added on 20131223 #
# Omri Bahumi for his fix added on 20131230 #
# Marc Falzon for his option mods added on 20190822 #
# Andreas Pfeiffer for adding socket option on 20190822 #
# History: #
# 2008041700 Original Script modified #
# 2008041701 Added additional info if status OK #
# 2008041702 Added usage of script with params -H -u -p #
# 2008041703 Added bindir variable for multiple platforms #
# 2008041704 Added help because mankind needs help #
# 2008093000 Using /bin/sh instead of /bin/bash #
# 2008093001 Added port for MySQL server #
# 2008093002 Added mysqldir if mysql binary is elsewhere #
# 2008101501 Changed bindir/mysqldir to use PATH #
# 2008101501 Use $() instead of `` to avoid forks #
# 2008101501 Use ${} for variables to prevent problems #
# 2008101501 Check if required commands exist #
# 2008101501 Check if mysql connection works #
# 2008101501 Exit with unknown status at script end #
# 2008101501 Also display help if no option is given #
# 2008101501 Add warning/critical check to delay #
# 2011062200 Add perfdata #
# 2011122700 Checking Slave_IO_Running #
# 2012080300 Changed to use only one mysql query #
# 2012080301 Added warn and crit delay as optional args #
# 2012080302 Added standard -h option for syntax help #
# 2012080303 Added check for mandatory options passed in #
# 2012080304 Added error output from mysql #
# 2012080305 Changed from 'cut' to 'awk' (eliminate ws) #
# 2012111600 Do not show password in error output #
# 2013042800 Changed PATH to use existing PATH, too #
# 2013050800 Bugfix in PATH export #
# 2013092700 Bugfix in PATH export #
# 2013092701 Bugfix in getopts #
# 2013101600 Rewrite of threshold logic and handling #
# 2013101601 Optical clean up #
# 2013101602 Rewrite help output #
# 2013101700 Handle Slave IO in 'Connecting' state #
# 2013101701 Minor changes in output, handling UNKNOWN situations now #
# 2013101702 Exit CRITICAL when Slave IO in Connecting state #
# 2013123000 Slave_SQL_Running also matched Slave_SQL_Running_State #
# 2015011600 Added 'moving' check to catch possible connection issues #
# 2015011900 Use its own threshold for replication moving check #
# 2019082200 Add support for mysql option file #
# 2019082201 Improve password security (remove from mysql cli) #
# 2019082202 Added socket parameter (-S) #
# 2019082203 Use default port 3306, makes -P optional #
# 2019082204 Fix moving subcheck, improve documentation #
#########################################################################
# Usage: ./check_mysql_slavestatus.sh (-o file|(-H dbhost [-P port]|-S socket) -u dbuser -p dbpass) [-s connection] [-w integer] [-c integer] [-m integer]
#########################################################################
help="\ncheck_mysql_slavestatus.sh (c) 2008-2019 GNU GPLv2 licence
Usage: $0 (-o file|(-H dbhost [-P port]|-S socket) -u username -p password) [-s connection] [-w integer] [-c integer] [-m]\n
Options:\n-o Path to option file containing connection settings (e.g. /home/nagios/.my.cnf). Note: If this option is used, -H, -u, -p parameters will become optional\n-H Hostname or IP of slave server\n-P MySQL Port of slave server (optional, defaults to 3306)\n-u Username of DB-user\n-p Password of DB-user\n-S database socket\n-s Connection name (optional, with multi-source replication)\n-w Replication delay in seconds for Warning status (optional)\n-c Replication delay in seconds for Critical status (optional)\n-m Threshold in seconds since when replication did not move (compares the slaves log position)\n
Attention: The DB-user you type in must have CLIENT REPLICATION rights on the DB-server. Example:\n\tGRANT REPLICATION CLIENT on *.* TO 'nagios'@'%' IDENTIFIED BY 'secret';"
STATE_OK=0 # define the exit code if status is OK
STATE_WARNING=1 # define the exit code if status is Warning (not really used)
STATE_CRITICAL=2 # define the exit code if status is Critical
STATE_UNKNOWN=3 # define the exit code if status is Unknown
export PATH=$PATH:/usr/local/bin:/usr/bin:/bin # Set path
crit="No" # what is the answer of MySQL Slave_SQL_Running for a Critical status?
ok="Yes" # what is the answer of MySQL Slave_SQL_Running for an OK status?
port="-P 3306" # on which tcp port is the target MySQL slave listening?
for cmd in mysql awk grep expr [
do
if ! `which ${cmd} &>/dev/null`
then
echo "UNKNOWN: This script requires the command '${cmd}' but it does not exist; please check if command exists and PATH is correct"
exit ${STATE_UNKNOWN}
fi
done
# Check for people who need help
#########################################################################
if [ "${1}" = "--help" -o "${#}" = "0" ];
then
echo -e "${help}";
exit 1;
fi
# Important given variables for the DB-Connect
#########################################################################
while getopts "H:P:u:p:S:s:w:c:o:m:h" Input;
do
case ${Input} in
H) host="-h ${OPTARG}";slavetarget=${OPTARG};;
P) port="-P ${OPTARG}";;
u) user="-u ${OPTARG}";;
p) password="${OPTARG}"; export MYSQL_PWD="${OPTARG}";;
S) socket="-S ${OPTARG}";;
s) connection=\"${OPTARG}\";;
w) warn_delay=${OPTARG};;
c) crit_delay=${OPTARG};;
o) optfile="--defaults-extra-file=${OPTARG}";;
m) moving=${OPTARG};;
h) echo -e "${help}"; exit 1;;
\?) echo "Wrong option given. Check help (-h, --help) for usage."
exit 1
;;
esac
done
# Check if we can write to tmp
#########################################################################
test -w /tmp && tmpfile="/tmp/mysql_slave_${slavetarget}_pos.txt"
# Connect to the DB server and check for information
#########################################################################
# Check whether all required arguments were passed in (either option file or full connection settings)
if [[ -z "${optfile}" && -z "${host}" && -z "${socket}" ]]; then
echo -e "Missing required parameter(s)"; exit ${STATE_UNKNOWN}
elif [[ -n "${host}" && (-z "${user}" || -z "${password}") ]]; then
echo -e "Missing required parameter(s)"; exit ${STATE_UNKNOWN}
elif [[ -n "${socket}" && (-z "${user}" || -z "${password}") ]]; then
echo -e "Missing required parameter(s)"; exit ${STATE_UNKNOWN}
fi
# Connect to the DB server and store output in vars
if [[ -n $socket ]]; then
ConnectionResult=$(mysql ${optfile} ${socket} ${user} -e "show slave ${connection} status\G" 2>&1)
else
ConnectionResult=$(mysql ${optfile} ${host} ${port} ${user} -e "show slave ${connection} status\G" 2>&1)
fi
if [ -z "`echo "${ConnectionResult}" |grep Slave_IO_State`" ]; then
echo -e "CRITICAL: Unable to connect to server"
exit ${STATE_CRITICAL}
fi
check=`echo "${ConnectionResult}" |grep Slave_SQL_Running: | awk '{print $2}'`
checkio=`echo "${ConnectionResult}" |grep Slave_IO_Running: | awk '{print $2}'`
masterinfo=`echo "${ConnectionResult}" |grep Master_Host: | awk '{print $2}'`
delayinfo=`echo "${ConnectionResult}" |grep Seconds_Behind_Master: | awk '{print $2}'`
readpos=`echo "${ConnectionResult}" |grep Read_Master_Log_Pos: | awk '{print $2}'`
execpos=`echo "${ConnectionResult}" |grep Exec_Master_Log_Pos: | awk '{print $2}'`
# Output of different exit states
#########################################################################
if [ ${check} = "NULL" ]; then
echo "CRITICAL: Slave_SQL_Running is answering NULL"; exit ${STATE_CRITICAL};
fi
if [ ${check} = ${crit} ]; then
echo "CRITICAL: ${host}:${port} Slave_SQL_Running: ${check}"; exit ${STATE_CRITICAL};
fi
if [ ${checkio} = ${crit} ]; then
echo "CRITICAL: ${host} Slave_IO_Running: ${checkio}"; exit ${STATE_CRITICAL};
fi
if [ ${checkio} = "Connecting" ]; then
echo "CRITICAL: ${host} Slave_IO_Running: ${checkio}"; exit ${STATE_CRITICAL};
fi
if [ ${check} = ${ok} ] && [ ${checkio} = ${ok} ]; then
# Delay thresholds are set
if [[ -n ${warn_delay} ]] && [[ -n ${crit_delay} ]]; then
if ! [[ ${warn_delay} -gt 0 ]]; then echo "Warning threshold must be a valid integer greater than 0"; exit $STATE_UNKNOWN; fi
if ! [[ ${crit_delay} -gt 0 ]]; then echo "Critical threshold must be a valid integer greater than 0"; exit $STATE_UNKNOWN; fi
if [[ -z ${warn_delay} ]] || [[ -z ${crit_delay} ]]; then echo "Both warning and critical thresholds must be set"; exit $STATE_UNKNOWN; fi
if [[ ${warn_delay} -gt ${crit_delay} ]]; then echo "Warning threshold cannot be greater than critical"; exit $STATE_UNKNOWN; fi
if [[ ${delayinfo} -ge ${crit_delay} ]]
then echo "CRITICAL: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_CRITICAL}
elif [[ ${delayinfo} -ge ${warn_delay} ]]
then echo "WARNING: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_WARNING}
else
# Everything looks OK here but now let us check if the replication is moving
if [[ -n ${moving} ]] && [[ -n ${tmpfile} ]] && [[ $readpos -eq $execpos ]]
then
#echo "Debug: Read pos is $readpos - Exec pos is $execpos"
# Check if tmp file exists
curtime=`date +%s`
if [[ -w $tmpfile ]]
then
tmpfiletime=`date +%s -r $tmpfile`
if [[ `expr $curtime - $tmpfiletime` -gt ${moving} ]]
then
exectmp=`cat $tmpfile`
#echo "Debug: Exec pos in tmpfile is $exectmp"
if [[ $exectmp -eq $execpos ]]
then
# The value read from the tmp file and from db are the same. Replication hasn't moved!
echo "WARNING: Slave replication has not moved in ${moving} seconds. Manual check required."; exit ${STATE_WARNING}
else
# Replication has moved since the tmp file was written. Delete tmp file and output OK.
rm $tmpfile
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else
echo "$execpos" > $tmpfile
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else # Everything OK (no additional moving check)
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
fi
else
# Without delay thresholds
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"
exit ${STATE_OK};
fi
fi
echo "UNKNOWN: should never reach this part (Slave_SQL_Running is ${check}, Slave_IO_Running is ${checkio})"
exit ${STATE_UNKNOWN}
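A hedged invocation example matching the usage line above (host and credentials are placeholders; the OK line mirrors the script's own output format):
./check_mysql_slavestatus.sh -H db1.example.com -u nagios -p secret -w 60 -c 300
# OK: Slave SQL running: Yes Slave IO running: Yes / master: master1.example.com / slave is 0 seconds behind master | delay=0s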

File diff suppressed because it is too large
@ -0,0 +1,192 @@
#!/bin/bash
set -eo pipefail
shopt -s nullglob
openssl req -x509 -sha256 -newkey rsa:2048 -keyout /var/lib/mysql/sql.key -out /var/lib/mysql/sql.crt -days 3650 -nodes -subj '/CN=mysql'
# if command starts with an option, prepend mysqld
if [ "${1:0:1}" = '-' ]; then
set -- mysqld "$@"
fi
# skip setup if they want an option that stops mysqld
wantHelp=
for arg; do
case "$arg" in
-'?'|--help|--print-defaults|-V|--version)
wantHelp=1
break
;;
esac
done
# usage: file_env VAR [DEFAULT]
# ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
# "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
local var="$1"
local fileVar="${var}_FILE"
local def="${2:-}"
if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
exit 1
fi
local val="$def"
if [ "${!var:-}" ]; then
val="${!var}"
elif [ "${!fileVar:-}" ]; then
val="$(< "${!fileVar}")"
fi
export "$var"="$val"
unset "$fileVar"
}
_check_config() {
toRun=( "$@" --verbose --help --log-bin-index="$(mktemp -u)" )
if ! errors="$("${toRun[@]}" 2>&1 >/dev/null)"; then
cat >&2 <<-EOM
ERROR: mysqld failed while attempting to check config
command was: "${toRun[*]}"
$errors
EOM
exit 1
fi
}
# Fetch value from server config
# We use mysqld --verbose --help instead of my_print_defaults because the
# latter only show values present in config files, and not server defaults
_get_config() {
local conf="$1"; shift
"$@" --verbose --help --log-bin-index="$(mktemp -u)" 2>/dev/null | awk '$1 == "'"$conf"'" { print $2; exit }'
}
# allow the container to be started with `--user`
if [ "$1" = 'mysqld' -a -z "$wantHelp" -a "$(id -u)" = '0' ]; then
_check_config "$@"
DATADIR="$(_get_config 'datadir' "$@")"
mkdir -p "$DATADIR"
chown -R mysql:mysql "$DATADIR"
exec gosu mysql "$BASH_SOURCE" "$@"
fi
if [ "$1" = 'mysqld' -a -z "$wantHelp" ]; then
# still need to check config, container may have started with --user
_check_config "$@"
# Get config
DATADIR="$(_get_config 'datadir' "$@")"
if [ ! -d "$DATADIR/mysql" ]; then
file_env 'MYSQL_ROOT_PASSWORD'
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
mkdir -p "$DATADIR"
echo 'Initializing database'
# "Other options are passed to mysqld." (so we pass all "mysqld" arguments directly here)
mysql_install_db --datadir="$DATADIR" --rpm "${@:2}"
echo 'Database initialized'
SOCKET="$(_get_config 'socket' "$@")"
"$@" --skip-networking --socket="${SOCKET}" &
pid="$!"
mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" )
for i in {30..0}; do
if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
break
fi
echo 'MySQL init process in progress...'
sleep 1
done
if [ "$i" = 0 ]; then
echo >&2 'MySQL init process failed.'
exit 1
fi
if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
# sed is for https://bugs.mysql.com/bug.php?id=20545
mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
fi
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
export MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
fi
rootCreate=
# default root to listen for connections from anywhere
file_env 'MYSQL_ROOT_HOST' '%'
if [ ! -z "$MYSQL_ROOT_HOST" -a "$MYSQL_ROOT_HOST" != 'localhost' ]; then
# no, we don't care if read finds a terminating character in this heredoc
# https://unix.stackexchange.com/questions/265149/why-is-set-o-errexit-breaking-this-read-heredoc-expression/265151#265151
read -r -d '' rootCreate <<-EOSQL || true
CREATE USER 'root'@'${MYSQL_ROOT_HOST}' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'${MYSQL_ROOT_HOST}' WITH GRANT OPTION ;
EOSQL
fi
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
DELETE FROM mysql.user WHERE user NOT IN ('mysql.sys', 'mysqlxsys', 'root') OR host NOT IN ('localhost') ;
SET PASSWORD FOR 'root'@'localhost'=PASSWORD('${MYSQL_ROOT_PASSWORD}') ;
GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION ;
${rootCreate}
DROP DATABASE IF EXISTS test ;
FLUSH PRIVILEGES ;
EOSQL
if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
fi
file_env 'MYSQL_DATABASE'
if [ "$MYSQL_DATABASE" ]; then
echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[@]}"
mysql+=( "$MYSQL_DATABASE" )
fi
file_env 'MYSQL_USER'
file_env 'MYSQL_PASSWORD'
if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[@]}"
if [ "$MYSQL_DATABASE" ]; then
echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" | "${mysql[@]}"
fi
fi
echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
if ! kill -s TERM "$pid" || ! wait "$pid"; then
echo >&2 'MySQL init process failed.'
exit 1
fi
echo
echo 'MySQL init process done. Ready for start up.'
echo
fi
fi
exec "$@"
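Thanks to the file_env helper above, each MYSQL_* credential can instead be supplied as a *_FILE path, which suits Docker secrets. A hedged sketch (image name and paths are placeholders):
docker run -d \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_pw \
  -v "$(pwd)/secrets:/run/secrets:ro" \
  example/mariadb-image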

@ -0,0 +1,130 @@
map $http_x_forwarded_proto $client_req_scheme_nc {
default $scheme;
https https;
}
server {
include /etc/nginx/conf.d/listen_ssl.active;
include /etc/nginx/conf.d/listen_plain.active;
include /etc/nginx/mime.types;
charset utf-8;
override_charset on;
ssl_certificate /etc/ssl/mail/cert.pem;
ssl_certificate_key /etc/ssl/mail/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_ecdh_curve X25519:X448:secp384r1:secp256k1;
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;
fastcgi_hide_header X-Powered-By;
server_name NC_SUBD;
root /web/nextcloud/;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $client_req_scheme_nc://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $client_req_scheme_nc://$host/remote.php/dav;
}
location = /.well-known/webfinger {
return 301 $client_req_scheme_nc://$host/index.php/.well-known/webfinger;
}
location = /.well-known/nodeinfo {
return 301 $client_req_scheme_nc://$host/index.php/.well-known/nodeinfo;
}
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /web;
}
fastcgi_buffers 64 4K;
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
set_real_ip_from fc00::/7;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
location / {
rewrite ^ /index.php$uri;
}
location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
deny all;
}
location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+)\.php(?:$|\/) {
fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
# Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
# Enable pretty urls
fastcgi_param front_controller_active true;
fastcgi_pass phpfpm:9002;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
client_max_body_size 0;
fastcgi_read_timeout 1200;
}
location ~ ^\/(?:updater|ocs-provider)(?:$|\/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
try_files $uri /index.php$request_uri;
access_log off;
}
}
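Once the vhost is live, the .well-known redirects above can be verified; a hedged check (NC_SUBD stands in for the real Nextcloud hostname, as in the config itself):
curl -sI https://NC_SUBD/.well-known/carddav | grep -i '^location'
# expected: Location: https://NC_SUBD/remote.php/dav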

2
data/assets/nextcloud/occ Executable file
@ -0,0 +1,2 @@
#!/bin/bash
docker exec -it -u www-data $(docker ps -f name=php-fpm-mailcow -q) php /web/nextcloud/occ "$@"
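Usage sketch: any arguments pass straight through to Nextcloud's occ inside the php-fpm-mailcow container, e.g.:
./occ status                  # show Nextcloud install status
./occ maintenance:mode --on   # toggle maintenance mode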

@ -0,0 +1,4 @@
#!/bin/bash
echo DBPASS=$(openssl rand -base64 32 | tr -dc _A-Z-a-z-0-9)
echo DBROOT=$(openssl rand -base64 32 | tr -dc _A-Z-a-z-0-9)

@ -0,0 +1,29 @@
<html>
<head>
<meta name="x-apple-disable-message-reformatting" />
<style>
body {
font-family: Helvetica, Arial, Sans-Serif;
}
/* mobile devices */
@media all and (max-width: 480px) {
.mob {
display: none;
}
}
</style>
</head>
<body>
Hello {{username2}},<br><br>
Somebody requested a new password for the {{hostname}} account associated with {{username}}.<br>
<small>Date of the password reset request: {{date}}</small><br><br>
You can reset your password by clicking the link below:<br>
<a href="{{link}}">{{link}}</a><br><br>
The link will be valid for the next {{token_lifetime}} minutes.<br><br>
If you did not request a new password, please ignore this email.<br>
</body>
</html>

@ -0,0 +1,11 @@
Hello {{username2}},
Somebody requested a new password for the {{hostname}} account associated with {{username}}.
Date of the password reset request: {{date}}
You can reset your password by clicking the link below:
{{link}}
The link will be valid for the next {{token_lifetime}} minutes.
If you did not request a new password, please ignore this email.

@ -0,0 +1,69 @@
<html>
<head>
<meta name="x-apple-disable-message-reformatting" />
<style>
body {
font-family: Helvetica, Arial, Sans-Serif;
}
table {
border-collapse: collapse;
width: 100%;
margin-bottom: 20px;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
vertical-align: top;
}
td.fixed {
white-space: nowrap;
}
th {
background-color: #56B04C;
color: white;
}
tr:nth-child(even) {
background-color: #f2f2f2;
}
/* mobile devices */
@media all and (max-width: 480px) {
.mob {
display: none;
}
}
</style>
</head>
<body>
<p>Hi {{username}}!<br>
{% if counter == 1 %}
There is 1 new message waiting in quarantine:<br>
{% else %}
There are {{counter}} new messages waiting in quarantine:<br>
{% endif %}
<table>
<tr><th>Subject</th><th>Sender</th><th class="mob">Score</th><th class="mob">Action</th><th class="mob">Arrived on</th>{% if quarantine_acl == 1 %}<th>Actions</th>{% endif %}</tr>
{% for line in meta|reverse %}
<tr>
<td>{{ line.subject|e }}</td>
<td>{{ line.sender|e }}</td>
<td class="mob">{{ line.score }}</td>
{% if line.action == "reject" %}
<td class="mob">Rejected</td>
{% else %}
<td class="mob">Sent to Junk folder</td>
{% endif %}
<td class="mob">{{ line.created }}</td>
{% if quarantine_acl == 1 %}
{% if line.action == "reject" %}
<td class="fixed"><a href="https://{{ hostname }}/qhandler/release/{{ line.qhash }}">Release to inbox</a> | <a href="https://{{ hostname }}/qhandler/delete/{{ line.qhash }}">delete</a></td>
{% else %}
<td class="fixed"><a href="https://{{ hostname }}/qhandler/release/{{ line.qhash }}">Send copy to inbox</a> | <a href="https://{{ hostname }}/qhandler/delete/{{ line.qhash }}">delete</a></td>
{% endif %}
{% endif %}
</tr>
{% endfor %}
</table>
</p>
</body>
</html>

73
data/assets/templates/quota.tpl Executable file
@ -0,0 +1,73 @@
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<style>
body {
font-family: Calibri, Arial, Verdana;
}
table {
border: 0;
border-collapse: collapse;
<!--[if mso]>
border-spacing: 0px;
table-layout: fixed;
<![endif]-->
}
tr {
display: flex;
}
#progressbar {
color: #000;
background-color: #f1f1f1;
width: 100%;
}
{% if (percent >= 95) %}
#progressbar {
color: #fff;
background-color: #FF0000;
text-align: center;
width: {{percent}}%;
}
{% elif (percent < 95) and (percent >= 80) %}
#progressbar {
color: #fff;
background-color: #FF8C00;
text-align: center;
width: {{percent}}%;
}
{% else %}
#progressbar {
color: #fff;
background-color: #00B000;
text-align: center;
width: {{percent}}%;
}
{% endif %}
#graybar {
background-color: #D8D8D8;
width: {{100 - percent}}%;
}
a:link, a:visited {
color: #858585;
text-decoration: none;
}
a:hover, a:active {
color: #4a81bf;
}
</style>
</head>
<body>
<table>
<tr>
<td colspan="2">
Hi {{username}}!<br /><br />
Your mailbox is now {{percent}}% full; please consider deleting old messages so you can continue to receive new mail.<br /><br />
</td>
</tr>
<tr>
<td id="progressbar">{{percent}}%</td>
<td id="graybar"></td>
</tr>
</table>
</body>
</html>

47
data/conf/clamav/clamd.conf Executable file
@ -0,0 +1,47 @@
#Debug true
#LogFile /dev/null
LogTime yes
LogClean yes
ExtendedDetectionInfo yes
PidFile /run/clamav/clamd.pid
OfficialDatabaseOnly no
LocalSocket /run/clamav/clamd.sock
TCPSocket 3310
StreamMaxLength 25M
MaxThreads 10
ReadTimeout 10
CommandReadTimeout 3
SendBufTimeout 200
MaxQueue 80
IdleTimeout 20
SelfCheck 3600
User clamav
Foreground yes
DetectPUA yes
# See https://github.com/vrtadmin/clamav-faq/blob/master/faq/faq-pua.md
#ExcludePUA NetTool
#ExcludePUA PWTool
#IncludePUA Spy
#IncludePUA Scanner
#IncludePUA RAT
HeuristicAlerts yes
ScanOLE2 yes
AlertOLE2Macros no
ScanPDF yes
ScanSWF yes
ScanXMLDOCS yes
ScanHWP3 yes
ScanMail yes
PhishingSignatures no
PhishingScanURLs no
HeuristicScanPrecedence yes
ScanHTML yes
ScanArchive yes
MaxScanSize 50M
MaxFileSize 25M
MaxRecursion 5
MaxFiles 200
Bytecode yes
BytecodeSecurity TrustSigned
BytecodeTimeout 1000
ConcurrentDatabaseReload no

Some files were not shown because too many files have changed in this diff