We did not have time in the last call to discuss this:
There are use cases where we need to share state between a request and a response microservice. We already have two such cases, so I suggest defining a common method to achieve this. The same approach could also be used to access the common config, e.g. if a microservice needs to know the proxy configuration (such as a backend entityid).
A simple mechanism would be to use a module-level variable as a singleton:
=====================
shared_state.py
state = {}
———
plugins/microservices/a.py
from shared_state import state  # import executes only once
...
state['a'] = 'foo'
———
plugins/microservices/b.py
from shared_state import state
whatever(state['a'])
=====================
I think that for just passing request status to response microservices and passing config data around, this should be good enough. There are several alternatives, like the Borg pattern, which I find harder to read.
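For comparison, here is a minimal sketch of the Borg pattern mentioned above (class and attribute names are illustrative). Instead of one module-level dict, every instance shares the same `__dict__`:

```python
class Borg:
    """All instances of this class share one attribute namespace."""
    _shared_state = {}

    def __init__(self):
        # every instance points its __dict__ at the same dict,
        # so attribute writes on any instance are visible on all
        self.__dict__ = self._shared_state


a = Borg()
a.value = "foo"

b = Borg()
print(b.value)  # b sees the attribute set on a
```

It works, but the indirection through `__dict__` is exactly what makes it harder to read than a plain shared module.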
- Rainer
Hi all,
I created a simple LDAP client on top of ldap3, which is what our
ldap_store_attributes MS is built on.
https://github.com/peppelinux/pyLDAP
I think it would be better to handle all the configuration parameters
subdivided according to the implemented API methods, like
https://github.com/peppelinux/pyLDAP/blob/master/settings.py.example#L4
At the moment we map them manually from the configuration into the ldap_store MS;
that approach would instead permit us something like:
https://github.com/peppelinux/pyLDAP/blob/master/client.py#L32
https://github.com/peppelinux/pyLDAP/blob/master/client.py#L41
Things would come as they are from the yaml configuration, without any
additional mapping in the MS code.
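A minimal sketch of the idea, with hypothetical client and method names (not the real pyLDAP API): the MS unpacks the configuration section straight into the client call, with no per-key mapping.

```python
# stand-in for what yaml.safe_load() would return from the MS configuration;
# keys mirror the client method's parameters exactly
config = {
    "search": {
        "base": "ou=people,dc=example,dc=org",
        "filter": "(uid={uid})",
        "attributes": ["cn", "mail"],
    }
}


class Client:
    """Illustrative client: parameter names match the config keys."""

    def search(self, base, filter, attributes):
        # a real client would run the LDAP search here
        return {"base": base, "filter": filter, "attributes": attributes}


# the whole yaml section is forwarded as keyword arguments:
# adding a parameter to the API only means adding a key to the yaml
result = Client().search(**config["search"])
print(result["base"])
```

The design choice is simply to keep the yaml structure and the client signature aligned, so the MS never has to translate between them.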
Another topic is the possibility to decouple standalone clients from the MS, as
a guiding principle.
The tool I showed can fetch data from multiple sources using a single
configuration.
It can also apply embedded rewrite rules. Doing this in the client, and
keeping that client decoupled from the MS code, would make debugging and
app/code reuse easier. In the MS code we would only include calls to the
client's API, to get the clients working and fetch what is needed from them.
They could also scale up in a multiprocessing setup more easily this way.
Multiple clients with the same methods and a similar API could be parallelized
for faster data aggregation and account linking. The same methods could be
used for a WS service with a SOAP client, a noSQL client and others.
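As a sketch of that parallelization idea, assuming hypothetical clients that all expose the same `search()` method (names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor


class DummyClient:
    """Stand-in for an LDAP/noSQL/SOAP client sharing one API."""

    def __init__(self, name):
        self.name = name

    def search(self, uid):
        # a real client would query its backend here
        return {self.name: {"uid": uid}}


clients = [DummyClient("ldap"), DummyClient("nosql")]


def aggregate(uid):
    """Query all clients in parallel and merge the partial results."""
    merged = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(lambda c: c.search(uid), clients):
            merged.update(partial)
    return merged


print(aggregate("jdoe"))
```

Because every client has the same method signature, the aggregation loop does not care which backend it is talking to.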
These thoughts are also linked to our latest comments about the
"global configuration visible into MS" and "a shareable context into MS";
these ideas could help in a wider approach with potential benefits for the
future.
I share as it is,
see you back soon
Hi folks,
I'm considering developing a microservice on top of asyncio to manage
multiple connections to many LDAP servers (or servers of any kind).
I think this would be the best solution for performant, flexible and
highly customizable account linking.
What do you think?
https://docs.python.org/3/library/asyncio.html
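A minimal sketch of the idea, with a placeholder coroutine standing in for a real async LDAP query (all names are illustrative):

```python
import asyncio


async def query(server, uid):
    """Stand-in for one async query against one server."""
    await asyncio.sleep(0)  # placeholder for real network I/O
    return server, {"uid": uid}


async def link_accounts(servers, uid):
    # one coroutine per server, all awaited concurrently
    results = await asyncio.gather(*(query(s, uid) for s in servers))
    return dict(results)


linked = asyncio.run(link_accounts(["ldap1", "ldap2"], "jdoe"))
print(sorted(linked))  # → ['ldap1', 'ldap2']
```

The slowest server then bounds the total time, instead of the sum of all queries as in a sequential loop.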
--
____________________
Dott. Giuseppe De Marco
CENTRO ICT DI ATENEO
University of Calabria
87036 Rende (CS) - Italy
Phone: +39 0984 496961
e-mail: giuseppe.demarco at unical.it
I would like to discuss a logging feature that I would like to see in SATOSA. In PR 237 I proposed adding a log filter that would enhance SATOSA's logging capabilities. Ivan rejected it for the (in general good) reason that the proxy should log everything and log processing should be external.
I agree in general, but there are bits where I still think they would be useful in SATOSA. These are:
1. In production environments it is unlikely that the full set of debug information is pushed to a logging service. However, it might be useful to get debug-level data for certain selections. Usually that would be based on IP addresses, which should not be too complicated to implement.
2. In a dev environment one is easily inundated with debug data. Shibboleth has a nice feature providing logging levels for certain aspects, such as XML tooling and SAML message de-/encoding. I find this capability quite useful, because in my dev environment I do not have an elaborate log processor. Attribute configuration in SATOSA could be helped by selected messages.
If done properly, this change has the following impact on all modules that instantiate a logger:
1. refactor the satosa_logger wrapper back to the native logger with similar signature
2. add a log filter after each get_logger()
The log filter (logging.Filter) is orthogonal to structured logging, and may even help to improve it.
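A minimal sketch of point 1, using the stdlib logging.Filter API. The `client_ip` record attribute is an assumption here, not something SATOSA sets today; the proxy would have to attach it, e.g. via the `extra` argument:

```python
import logging


class DebugByIPFilter(logging.Filter):
    """Pass DEBUG records only for selected client IPs;
    records above DEBUG always pass."""

    def __init__(self, debug_ips):
        super().__init__()
        self.debug_ips = set(debug_ips)

    def filter(self, record):
        if record.levelno > logging.DEBUG:
            return True
        # assumed attribute: the proxy must set client_ip on the record
        return getattr(record, "client_ip", None) in self.debug_ips


logger = logging.getLogger("satosa.demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.addFilter(DebugByIPFilter(["192.0.2.10"]))
logger.addHandler(handler)

# the first record passes the filter, the second is dropped by the handler
logger.debug("trace", extra={"client_ip": "192.0.2.10"})
logger.debug("trace", extra={"client_ip": "198.51.100.1"})
```

This is the "add a log filter after each get_logger()" step from the list above, sketched for one handler.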
see: https://github.com/IdentityPython/SATOSA/pull/237
Cheers, Rainer
Hi,
As already mentioned to Ivan during our previous meeting, I do not use
docker but a bootstrap procedure on top of virtualenv.
In production I use uwsgi instead of gunicorn. A configuration example is
here:
https://github.com/peppelinux/Satosa-saml2saml/tree/master/example/uwsgi_se…
If it could be useful to the community, we could serve these examples
directly in SATOSA. With uwsgi I get a lot of professional features, like an
HTTP statistics server in json format, triggers that reload workers on file
change (or simply a touch), and many others that are not yet included there.
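A minimal sketch of such a uwsgi ini, showing the stats server and touch-reload features mentioned above; paths and values are illustrative, not the actual example from the repo:

```ini
[uwsgi]
; serve the WSGI app directly over http (illustrative socket/path)
http-socket = :8080
wsgi-file = /opt/satosa/wsgi.py
processes = 4
; built-in statistics server, answering in json format
stats = :9191
stats-http = true
; reload all workers when this file changes (or is simply touched)
touch-reload = /opt/satosa/reload.me
```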
I share it as it comes; if useful I can do a style and comments clean-up