Hi all,
On Wed, 4 Jul 2018 at 13:01, Niels van Dijk <niels.vandijk at surfnet.nl> wrote:
> Hi all,
>
> No worries (yet): this is not about issues we have uncovered.
> We have the opportunity, however, to have a team from the GEANT project
> review the code of SATOSA. We already had them do a review of the
> InAcademia service itself; now they can look at the code.
I think that would be great! Is there a process for it? What is
checked, how, and to what end?
> Their question, however, is which areas would be of most interest to
> look at, as just starting at line 1 is probably not a good idea ;)
Could you also express why these code areas are most sensitive?
> My initial guess would be that the code handling incoming OIDC and
> SAML is most critical (so the backends), including the bits that do
> validation of these requests.
> Next, the code that handles the business logic of interpreting the
> internal state and making the responses out of that.
> Then the frontends.
> Does that make sense? Should we also include looking deeply into the
> libraries?
SATOSA only connects frontends to backends (and vice versa), with some
intermediate logic that defines the internal representation of the
information it has collected, plus the metadata and configuration. The
"heavy lifting" is left to the libraries that actually implement the
standards. IMHO, the tricky bits are there, in the libs. So, my vote
would be yes, but let's start with the shell (SATOSA) and see if we can
move towards the core (the libs) later.
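
To make the shell/core distinction concrete, here is a minimal
conceptual sketch of the flow in Python. The names (InternalData,
backend_handle_response, frontend_handle_response) are illustrative
stand-ins made up for this email, not SATOSA's actual classes or API;
in the real code the protocol parsing and validation is delegated to
the underlying libraries.

    # Conceptual sketch only; all names are stand-ins, not SATOSA's real API.
    from dataclasses import dataclass, field

    @dataclass
    class InternalData:
        """Stand-in for the internal representation built between the
        backend (towards the IdP/OP) and the frontend (towards the SP/RP)."""
        subject_id: str
        attributes: dict = field(default_factory=dict)

    def backend_handle_response(raw_response: dict) -> InternalData:
        # Most sensitive step: validate the incoming SAML/OIDC response
        # (signature, issuer, audience, timestamps) before trusting it.
        # In SATOSA the actual checks live in the protocol libraries.
        if "subject" not in raw_response:
            raise ValueError("response failed validation")
        # Map the protocol-specific payload to the internal representation.
        return InternalData(
            subject_id=raw_response["subject"],
            attributes=raw_response.get("claims", {}),
        )

    def frontend_handle_response(data: InternalData) -> dict:
        # Business logic: build the outgoing response from internal
        # state only, never from the raw protocol message.
        return {"sub": data.subject_id, "attrs": data.attributes}

    incoming = {"subject": "alice", "claims": {"affiliation": "student"}}
    print(frontend_handle_response(backend_handle_response(incoming)))

The review priorities Niels listed map onto that: the validation step in
the backend first, then the internal-state logic, then the frontends.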
> In addition, are you aware of any other reviews that were performed?
> If so, we would be really interested to learn about them.
> Of course we will share the findings in a confidential way. By the
> way, does idpy have some contingency rules about that already?
A document about how to report security vulnerabilities/issues and how
we should handle such reports is being drafted.
Cheers,
--
Ivan c00kiemon5ter Kanakarakis >:3