Big Data

Hortonworks and Apache Hadoop, the open-source software for reliable, scalable and distributed computing

Why choose Cloudera?

Cloudera is known worldwide for its ability to integrate Big Data features from the Apache Hadoop ecosystem, which empower companies to store, manage and analyse vast amounts of data quickly and reliably on commodity hardware. Apache Hadoop and Cloudera offer an open-source platform for distributed storage and distributed processing on computer clusters. Hadoop provides a distributed file system with high-throughput data access, a framework for job scheduling and cluster resource management, and features for the parallel processing of large data sets. The wider Big Data ecosystem adds tools for managing and monitoring Hadoop clusters, data serialization, scalable databases, data warehouse infrastructure, machine learning, data mining and data flow programming. These technologies have changed the way businesses are managed; nowadays, more than half of the Fortune 50 have adopted Hadoop.


We have experience providing Big Data solutions for our clients using the Apache Hadoop ecosystem and Hortonworks software. Since 2012 we have been working on Big Data solutions based on Hadoop, MongoDB, Phoenix, HBase, Ambari and others, and we are now considered experts in the Big Data business. We can design tailored dashboards, business reports and prediction tools. With Cloudera, Hadoop and ZYLK, companies can easily access Big Data technologies to extract useful information and improve their competitiveness. We have implemented Big Data solutions in sectors such as telecommunications, energy and public administration, with remarkable results for our clients.

Tell us about your project


Blog - Last entries


ZYLK recognized as a "Cloudera Silver Partner"

At ZYLK we have been pioneers and experts in Big Data technologies since 2012. Since then we have provided our clients with solutions based on the Apache Hadoop ecosystem. We develop key value projects, consultancy services and customer support for data analytics business strategies.

Read More

Setting up a lab with Apache Ozone

It has been quite a while since I last wrote anything technical about the world of data analytics and Big Data. At zylk we have kept working with the Apache Hadoop ecosystem (Hive, YARN, HDFS, etc.) and, as always, we have also been following some Apache Foundation projects that we find interesting. Among them there are three we especially like: Apache Ozone, Apache

Read More

Creating custom maps for data analysis

One of the usual tasks in advanced data analysis is representing the data, both during the exploratory phase and when working with the results of the models applied. In this context there is a library/project that is very interesting and is being used quite widely. This project is plotly, and it is far more interesting than a mere library...

Read More

Blog - Most visited


Simple Kibana dashboard for monitoring Alfresco Logs

Some days ago I wrote a post about how to set up a basic Kibana dashboard in Alfresco Enterprise with JMX metrics, from a logstash JMX input. Today I'm going to add some simple configuration for creating a dashboard for Alfresco logs. The ELK architecture is the same as in the previous post, with logstash running in your Alfresco instance and a dedicated Elastic Search and...
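As an illustration of the kind of configuration the post describes, here is a minimal logstash pipeline sketch for shipping Alfresco logs to Elasticsearch. The log path, grok pattern and Elasticsearch host are assumptions for this example, not taken from the post itself; adjust them to your own Alfresco installation and log4j layout.

```
input {
  file {
    # Assumed Alfresco/Tomcat log location; change to your install path
    path => "/opt/alfresco/tomcat/logs/catalina.out"
    start_position => "beginning"
    type => "alfresco-log"
  }
}
filter {
  # Assumed log4j line layout: "2018-01-10 10:23:45,123 ERROR [category] message"
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:level} \[%{DATA:category}\] %{GREEDYDATA:logmessage}" }
  }
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  elasticsearch {
    # Hypothetical host; point this at your dedicated Elastic Search node
    hosts => ["elastic.example.com:9200"]
    index => "alfresco-logs-%{+YYYY.MM.dd}"
  }
}
```

With an index pattern like `alfresco-logs-*` defined in Kibana, the `level` and `category` fields extracted above can then drive the dashboard visualizations.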

Read More

See you at Alfresco Devcon 2018

Last week, the Alfresco Devcon 2018 conference program was published; the conference will be held in Lisbon next January. Many Alfresco experts from the community, customers, partners and employees will take part in this fantastic event, built around Alfresco-related technologies. The program looks really interesting and trendy, with topics such as production-ready Docker stacks, Alfresco deployments using Kubernetes, AWS use cases, SDK 3.0 setups, Alfresco Development Framework (ADF)...

Read More

Kibana dashboard for monitoring Alfresco JMX metrics

This weekend I read on the Elastic blog that Mr. Robot uses Kibana for monitoring the Dark Army, so I decided to write a post about a recent monitoring project I was involved in last week, representing some basic (but interesting) Alfresco JMX metrics in a clustered environment. As you probably know, Kibana is a useful tool in the ELK stack, which is composed of Elastic Search as the indexing backend,...
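As a rough sketch of what such a logstash JMX input might look like (using the community `logstash-input-jmx` plugin), the snippet below polls JMX attributes on a schedule. The directory path, port and attribute names are assumptions for illustration, not taken from the post itself.

```
input {
  jmx {
    # Directory containing JSON files that describe the JMX queries (assumed path)
    path => "/etc/logstash/jmx"
    polling_frequency => 15
    type => "jmx"
  }
}
```

Each JSON file in that directory then names a host, a port and the MBeans to poll; for example, a hypothetical query for JVM heap usage on an Alfresco node might look like:

```
{
  "host": "localhost",
  "port": 50500,
  "queries": [
    {
      "object_name": "java.lang:type=Memory",
      "attributes": ["HeapMemoryUsage"],
      "object_alias": "Memory"
    }
  ]
}
```

The resulting metric events can then be indexed into Elastic Search and charted in Kibana, one series per cluster node.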

Read More