Together, an even better Internet

Articles tagged with: DC

Jaguar Network launches the construction of a third DC in Lyon

on Friday, 06 November 2020 Posted in Archives Rezopole


After acquiring DCforData and its datacenter in Limonest in 2018, Jaguar Network inaugurated its second datacenter, "Rock", a year later in the 8th arrondissement of Lyon. The operator and host, the B2B subsidiary of the Iliad Group, is now launching the construction of a third site.

 

This new facility will meet the exponential demand for data hosting in the Auvergne-Rhône-Alpes region while keeping data sovereignty close to home. The aim is to offer, from the outset, new complementary services for pioneering sectors such as Industry 4.0 and e-health.

 

The hosting architectures, now spread over three active sites, will make it possible to serve demand from across the Lyon metropolitan area and its region while guaranteeing a diversified and secure power supply.

Interconnected with leading international operators as well as national and regional operators, this new very high-speed communication node will support new uses and the city's transformation by optimizing connectivity.

Focused on the smart city and the challenges of AI and big data, this new datacenter is designed to create new partnerships within the ecosystem and open up opportunities for a rapidly changing employment pool.

 

The announcement confirms the Iliad Group's investment in fiber optics, with the aim of connecting 100% of companies in the AuRA region by 2024.

For Jaguar Network, this confirms its establishment in France's second-largest economic region, in line with its historical market of SMEs, mid-sized companies and large accounts. It also prefigures the Iliad Group's forthcoming arrival in the enterprise market.

 Read the article

 

Source : Datacenter Magazine

Heat wave: why French DCs are holding up

on Thursday, 01 August 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Heat episodes are not taken lightly by data center operators. In France, "we have gone from 40 degrees to 46 degrees in a few years. We have caught up with Spain's design specifications," says Marie Chabanon, Technical Director of the DATA4 Group.

 

To ward off any heat stroke, datacenters' tolerance of high temperatures has been raised. "The great fear is the domino effect [...] If all or part of the cold-production infrastructure has problems, it affects the rest of the equipment. And if the refrigeration unit stops, that, together with a complete power outage, is the worst thing that can happen to us," adds Fabrice Coquio, Managing Director of Interxion. This risk is also tied to the quality of power distribution by RTE or Enedis. "We must anticipate the risk of an electrical loss or incident," explains Marie Chabanon.

 

But data center operators have a secret weapon against this domino effect. "Data center electrical systems are built to be able to run at 100%. In practice, this is never the case. The consequence is that in the event of an extra load, such as higher cooling demand, we have unallocated power that we can draw on," explains Fabien Gautier of Equinix. This is called capacity redundancy.

 

Especially since the densification of computing power per unit of space in recent years, driven by the democratization of virtualization, has led to more consumption and more heat. "With racks of 14 or 15 kVA, we create hot spots, which are more sensitive to heat waves," explains Fabien Gautier. Organizing the IT architecture deployed in the rooms is therefore essential. "Our work is thus the urbanization of the rooms. If they were filled on the fly, that can be a problem," he adds.

This involves, among other things, load balancing. "Our data centers are designed with redundancy and a 50% load rate. The backup machines can be used to provide additional power" in the event of a heat wave, says Marie Chabanon. This must nevertheless be anticipated: "We must ensure that backup systems are ready to operate, through maintenance and inspection work on the backup equipment."
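As a back-of-the-envelope illustration of this headroom logic, here is a minimal Python sketch; the rated capacity and peak-demand figures are hypothetical, and only the 50% design load rate comes from the article.

```python
# Hypothetical illustration of "capacity redundancy": a site whose electrical
# and cooling plant is rated for 100% load but operated at a ~50% design load
# keeps the difference in reserve for peaks such as a heat wave.

RATED_CAPACITY_KW = 2000   # assumed plant rating (hypothetical)
DESIGN_LOAD_RATE = 0.50    # normal operating point cited in the article

normal_load_kw = RATED_CAPACITY_KW * DESIGN_LOAD_RATE
headroom_kw = RATED_CAPACITY_KW - normal_load_kw

# Suppose a heat wave pushes cooling demand 30% above normal (hypothetical).
peak_load_kw = normal_load_kw * 1.30

print(f"Normal load: {normal_load_kw:.0f} kW, headroom: {headroom_kw:.0f} kW")
print(f"Heat-wave load: {peak_load_kw:.0f} kW ->",
      "covered" if peak_load_kw <= RATED_CAPACITY_KW else "NOT covered")
```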

 

Protecting data centers against heat also requires installing curative systems. "We installed spray systems to shower the rooftop equipment with water that is not too cold," says Fabrice Coquio.

And to be prepared for any eventuality into the early evening, the schedules of the technicians present on site have been adjusted. Customers must also be warned so that they remain vigilant.

 

Recent advances in hardware robustness and data center design have made it possible to raise temperatures in the server rooms. "The idea is that the lower the PUE (Power Usage Effectiveness), the better the performance. Ten years ago, we built datacenters where it was difficult to achieve a PUE of 1.6. Today we are at 1.2 and getting closer to 1, which represents 20% savings by playing on temperature and on the energy performance of the new equipment," says Marie Chabanon. As a result, cooling now targets the machines directly with forced air; there is no longer any need to refrigerate entire rooms.

"We are seeing an evolution in the design of indoor temperature according to the recommendations of the Ashrae (American Society of Heating and Ventilating Engineers). The idea is to work well with much higher temperature ranges. We have gone from 20 to 22 degrees to 18 to 27 degrees," she adds. Since 2011, these standards have been raised: they recommend blowing at 26 degrees on the front panel on indoor equipment. "The humidity level was also modified [...] In 2008, it was between 40 and 60%. It is now 70%," says Fabrice Coquio.

 

This limits cooling costs without compromising the resilience of the installations: a critical point in hot weather.

 Read the article

 

Source : ZDNet

The Data Center Continuum

on Tuesday, 25 June 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


The visionary trend of the 2010s was to concentrate data center floor space massively in hyperscale DCs, ideally located in areas close to the Arctic Circle. At the time, only the issue of systemic risk seemed likely to slow this development.

 

But today the reality is different: a continuum model has replaced that vision of hyper-concentrated floor space. It can be summarized in six levels.

  • Hyperscale Data Centers remain attractive for mass storage and non-transactional processing. Their objective is the lowest production cost, achieved by pooling large surface areas where land and energy are cheap.
  • Hub Data Centers are mainly located, in Europe, in Frankfurt, London, Amsterdam and Paris. These areas concentrate large data centers and benefit from fast interconnection between them. They over-attract operators because interconnection takes precedence over the potential of the local market.
  • Regional Data Centers, located in all other major cities, address local economic potential, with enterprise cloud players or hosting providers acting as the first level of access to the Hub DCs.
  • "5G" Data Centers will be located as close as possible to urban areas in order to meet the latency requirements of consumer uses.
  • Micro-Data Centers will bring low latency to high concentrations of use (a stadium, a factory).
  • Pico-Data Centers will address individual use, bringing minimal latency and, above all, management of private data.

 

Despite their different sizes, the first three levels follow the same design principles, except that Hyperscale Data Centers often serve a single user, which allows them to make more restrictive design choices than shared colocation facilities can.

The last three levels belong to the Edge universe and aim to position DC capacity as close as possible to usage. These levels, however, follow different design principles.

Micro- and pico-Data Centers will be deployed in an industrial fashion; the main issues will relate more to the physical protection and the maintenance and operation of these infrastructures.

The "5G" Data Centers bring a new deal. Indeed, they have all the characteristics of a "small" DC but must be implemented in complex environments. They are subject to numerous safety and standards compliance constraints being located in urban areas. However, the greatest complexity lies in the lack of space to deploy the technical packages.

 Read the article

 

Source : Global Security Mag

Peering and central DCs: essential?

on Friday, 14 June 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Central data centres are connectivity relays, real marketplaces: they bring together almost all the players in the digital value chain. The challenge is therefore to know how to identify and recognize them in order to open a PoP (Point of Presence) there.

 

There are three types of data centers: hyperscale, edge and core. Usually organized in a loop, each has a very specific role in an IT architecture. It is very common to see players host their application in a hyperscale facility, deploy their IT in an edge facility, and secure an optimized network path by creating peering links in a core facility.

 

Central DCs have a very specific importance and are therefore becoming real performance levers that determine many infrastructure choices.

But how do you identify them? The easiest way is to consult the databases in which network players register themselves, such as PeeringDB, and look for the data centre with the largest number of members. If a site stands out through the number of members and network ports available, fully amortized network equipment or an extremely wide choice of players, then peering is its main attraction.
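As an illustration of such a lookup, the sketch below queries the public PeeringDB REST API and ranks facilities by the number of networks present; the endpoint and field names follow PeeringDB's published schema but should be verified against the current API documentation.

```python
# Rank colocation facilities by number of member networks via PeeringDB.
# Each "netfac" record links one network (net_id) to one facility (fac_id).
import requests
from collections import Counter

API = "https://www.peeringdb.com/api"

# Full netfac dump; large, so allow a generous timeout.
netfacs = requests.get(f"{API}/netfac", timeout=120).json()["data"]
per_facility = Counter(rec["fac_id"] for rec in netfacs)

# Resolve the ten best-connected facilities to human-readable names: these
# are the "central DC" candidates whose draw is the density of members.
for fac_id, n_nets in per_facility.most_common(10):
    fac = requests.get(f"{API}/fac/{fac_id}", timeout=30).json()["data"][0]
    print(f"{fac['name']} ({fac['city']}): {n_nets} networks present")
```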

 

In a central DC everyone is on an equal footing: everyone exchanges data via a physical connection from point A to point B. Whatever the nature of the interconnection (peering, direct interconnection or transit), everyone is within cable distance of my rack, so I benefit from a clustering effect. And the effect is virtuous: the more players a central data center brings together, the more interconnection there is, and the cheaper it gets.

 

Peering is a strong trend that is becoming essential. A study published by Arcep in 2017 on traffic measurement among ISPs in France indicates that the data exchanged on French territory breaks down as follows: 50% transit, 46% private peering and 4% public peering. The Journal du Net observed the same ratios in one of its central data centers. Between 2017 and 2018, the share of transit fell very significantly, public peering grew, and private peering increased sharply. Three main consequences follow from this dynamic: in the short and medium term, content players will get as close as possible to end customers, bypassing hosters and transit providers; transit providers, seeing their business decline, will try to recover the margins they are losing on the CDN link; and ISPs will try to get closer to end customers themselves by including content in their offers.

 

Several good practices are worth sharing before putting traffic live. First, start with the application: before choosing where to host your IT, consider its nature, then organize your architecture accordingly. The challenge is to create network access that improves the user experience and reduces costs. Depending on their priority and the level of security required, applications will therefore be split between core, edge and hyperscale.

Second, how do you bring users closer to these applications? The alternative is quite simple: either use peering or direct interconnections, or host the application locally in your own data center and set up a private network link to the end user.
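As a toy illustration of this arbitration between core, edge and hyperscale, here is a hypothetical placement policy in Python; the criteria and thresholds are illustrative assumptions, not rules from the article.

```python
# Toy placement policy: route an application to a core, edge or hyperscale
# data center according to its latency tolerance and interconnection needs.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    max_latency_ms: float  # tightest latency the application tolerates
    needs_peering: bool    # benefits from dense interconnection with ISPs/CDNs

def place(app: App) -> str:
    if app.needs_peering:
        return "core"        # marketplace DC: peering and direct interconnects
    if app.max_latency_ms < 10:
        return "edge"        # close to users for low latency
    return "hyperscale"      # cheapest bulk capacity for tolerant workloads

for app in (App("video origin with ISP peering", 50, True),
            App("factory telemetry", 5, False),
            App("cold archive", 500, False)):
    print(f"{app.name} -> {place(app)}")
```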

 

The direction of travel seems to be the transformation of the IT manager into a buyer. IT managers are now in a position to arbitrate these outsourcing choices across the three types of data centers. Technical choices are thus becoming business choices.

 

 Read the article

 

Source : Journal du Net

DC under the influence of the Cloud

on Thursday, 02 May 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Published by Axians, the study "DataCenter 2018-2021: which evolutions for your IT infrastructures?" indicates that by 2021 data centres will be managed mainly in private cloud mode, and that more than half will need to integrate a data management model based on a hybrid cloud.

Surveyed in 2018, nearly 80% of CIOs said they mainly manage on-premise infrastructure, and nearly a third manage private cloud infrastructure. By 2021 this trend will intensify because, contrary to popular belief, internal data centres will not disappear: it is the technologies they deploy that will evolve and allow them to operate in cloud mode.

Although more than half of companies are aware of the need to manage data in a mixed public-private cloud mode, only 12% of respondents have a hybrid cloud implementation project.

According to the study, the four main current challenges for IT departments are security (73%), cost control (66%), regulatory compliance (60%) and the digitalization of business lines (52%). The technology expected to have the greatest impact on data centers within three years is cybersecurity, ahead of business service automation. On the software side, VMware leads ahead of Microsoft and Red Hat, while among cloud operators Microsoft is cited first, ahead of OVH and AWS.

The majority of the CIOs interviewed see themselves remaining the technical and operational guarantors of infrastructure, strategy and innovation. However, new roles are emerging, such as SLA-driven private cloud resource providers or hybrid cloud operators.

 Read the article

 

Source : Informatique News

Increase in expenses dedicated to DataCenters

on Wednesday, 17 April 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Driven by a booming cloud infrastructure market, hardware and software spending in datacenters increased by 17% in 2018, in a global market dominated by Dell EMC, followed by Cisco, Hewlett Packard Enterprise and Huawei. According to Synergy Research Group, which publishes these figures, investment is driven by "growing demand" for public cloud services and the need for "ever richer" configurations. As a result, the average selling price of enterprise servers has soared.


In more detail, spending on infrastructure for the public cloud increased by 30%, compared with 13% for equipping enterprise data centers. "Cloud services revenues continue to grow by nearly 50% per year," said John Dinsdale, analyst at Synergy Research Group. "SaaS and e-commerce revenues are each increasing by about 30%. All these factors contribute to a significant increase in spending on public cloud infrastructure," he adds.
The public cloud market is dominated by the ODMs (original design manufacturers), which together account for the largest share of cumulative revenue. Among brands, Dell EMC is ahead of Cisco, HPE and Huawei. Dell EMC also leads the private cloud market, followed by Microsoft, HPE and Cisco. The same four providers lead the non-cloud data center market, though in a different order.


Total revenue from data center equipment, cloud and non-cloud, hardware and software combined, reached $150 billion in 2018, the analyst firm says. Servers, operating systems, storage, networking and related software make up 96% of the data center infrastructure market; network security and management software account for the rest.
By segment, Dell EMC leads in server and storage revenue, while Cisco dominates the networking segment. Behind them come Microsoft, HPE, VMware, Lenovo, Inspur, NetApp and Huawei, with Huawei recording the strongest growth over the year.


"We are also seeing relatively strong growth in infrastructure spending in enterprise data centers, with more complex workloads, hybrid cloud requirements, increased server functionality and higher component costs being the main drivers," concludes Dinsdale.

 

 Read the article

 

Source : Le Monde Informatique

DC failures: caused by the network?

on Wednesday, 17 April 2019 Posted in Archives Rezopole


While power outages are a frequent cause of data center downtime, they are no longer the only one: IT system failures and network errors are causing more and more incidents. The Uptime Institute therefore examined known outages to determine what caused unplanned service interruptions, analyzing 162 outages reported in traditional and social media over the past three years.

27 outages were reported in the media in 2016, 57 in 2017 and 78 in 2018. "Service outages are increasingly making headlines in the media," said Andy Lawrence, the Institute's Executive Director of Research. This does not necessarily mean that the number of failures is skyrocketing, but rather that downtime is attracting more and more attention. "It is clear that for users, the impact of outages is certainly more damaging today," he adds.

The study revealed that in global outages, network and IT system problems are more often blamed than those related to power supply. This is explained by the fact that power supply systems are more reliable than in the past and that there are fewer power outages in data centers.

At the same time, the increasing complexity of IT environments is causing a growing number of IT and network problems. "Data is now dispersed across multiple locations, with critical dependencies on the network, on how applications are architected and on how databases replicate one another. It is a very complex system, and it now takes fewer events to disrupt its operation," said Todd Traver, Vice President of IT Optimization and Strategy at the Uptime Institute.

The trend is even clearer when comparing causes year over year. 28% of outages were due to power supply problems in 2017, against 11% the following year. IT system failures remained relatively constant: 32% in 2017 and 35% in 2018. Outages due to network problems increased significantly, from 19% in 2017 to 32% in 2018. "Things now depend not on one or two sites but on three, four or more. The network plays an increasingly important role in IT resilience," says Todd Traver.

To distinguish an interruption that threatens a company's business from a merely disruptive failure, the Uptime Institute has developed a five-level rating scale (sketched in code after the list):

  • Level 1: a negligible outage. The failure is recordable but has little or no obvious impact on services and causes no service interruption.
  • Level 2: a minimal service interruption. Services are disrupted, but the effect on users, customers or reputation is minimal.
  • Level 3: a business-significant service interruption. Customer or user service is interrupted, most often with limited scope, duration or impact. The financial impact is minimal or nil, but there is some impact on reputation or compliance.
  • Level 4: a serious operational or service failure, disrupting services and/or operations and involving financial loss, non-compliance, reputation damage, possibly even safety issues, and potential loss of customers.
  • Level 5: a business- or mission-critical failure, resulting in a major and damaging interruption of services and/or operations, with significant financial loss, safety issues, non-compliance, customer losses and reputation damage.
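Expressed as a small data structure, the scale might look like the following illustrative Python enum; the level names are paraphrased from the article and are not an official Uptime Institute artifact.

```python
# Illustrative encoding of the Uptime Institute's five-level outage severity
# scale for use in incident-tracking tooling (names paraphrased, not official).
from enum import IntEnum

class OutageSeverity(IntEnum):
    NEGLIGIBLE = 1   # recordable, little or no impact, no service interruption
    MINIMAL = 2      # services disrupted; minimal user or reputation effect
    SIGNIFICANT = 3  # customer-facing interruption of limited scope/duration
    SERIOUS = 4      # financial loss, non-compliance, reputation damage
    CRITICAL = 5     # major, damaging interruption with significant losses

def requires_postmortem(severity: OutageSeverity) -> bool:
    """Example policy: any business-significant outage gets a formal review."""
    return severity >= OutageSeverity.SIGNIFICANT

print(requires_postmortem(OutageSeverity.MINIMAL))   # False
print(requires_postmortem(OutageSeverity.SERIOUS))   # True
```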

 

This analysis was further developed by researchers who specifically identified the origin of data center failures.

The most common reasons for failures when the network is down:

  • fiber cuts outside the datacenter and insufficient number of routing alternatives
  • intermittent failure of the main switches and absence of secondary routers
  • major switch failure without backup
  • incorrect traffic configuration during maintenance
  • incorrect configuration of routers and software-defined networks
  • power failure of individual components without backup power, such as switches and routers


For IT, the most common causes are:

  • poorly managed upgrade
  • failure and subsequent data corruption of a large number of disks or SAN storage systems
  • synchronization failure or programming errors in the load balancing or traffic management system
  • poorly programmed failover, synchronization or disaster recovery systems
  • power loss to individual components without backup power


When the power supply fails, the reasons for the failures are:

  • lightning causes overvoltages and power outages
  • intermittent failures with transfer switches and inability to start generators or transfer to a second datacenter
  • UPS failures and failure to switch over to secondary systems
  • the utility is unable to deliver the necessary power, with subsequent failure of the generator or UPS
  • damage to computer equipment caused by overvoltages

 

"In general, companies should pay more attention to the resilience of data centers. They need to know their architectures, to understand all the interdependencies, to identify the reasons for failures, to plan solutions in case of failure. However, this last aspect is often neglected," adds Todd Traver.

 

 Read the article

 

Source : Le Monde Informatique

Designing DCs for tomorrow today

on Wednesday, 27 February 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


How can we succeed in building IT infrastructures in a sustainable and perennial way for the next 20 years? What are the important elements to consider during the design phase?

Although building data centers may seem easy, it is a rapidly evolving industry. Today's rooms are becoming denser, servers consume ever more energy and are getting heavier, modularity concepts shake up the market every month, and product ranges evolve rapidly to better meet users' needs.

This is why adaptability and modularity must be built into the solutions from the design phase: for example, choosing modular cooling and electrical solutions, allowing power and load to be increased during maintenance, and oversizing major equipment.
It can also be very useful to adopt new Agile working methods; it is essential to remain flexible and adapt to changes that can durably affect the project.
Modularity is equally essential during the design phase, especially if you choose an atypical location for your data center. Legal or regulatory aspects, however, may run counter to this modularity, so these issues must be addressed as early as possible, since they often carry incompressible lead times.

 Read the article

 

Source : Le Monde Informatique

Development of French DCs

on Wednesday, 20 February 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Interviewed by LeMagIT, Olivier Micheli notes that French data centers are finally attracting international cloud players and that they are expanding geographically in order to reduce latency.

Olivier Micheli, both CEO of the Data4 Group and President of the France Datacenter association, estimates that France has 200 large datacenters of up to 10,000 m². The capital region hosts the largest number of them, because Paris is a European interconnection node.
Across the country there are also between 3,000 and 5,000 private computer rooms of varying size and power.
Beyond companies' desire to keep control of their equipment, ever-lower latency is increasingly important to local economic activity and the development of smart cities.
According to Olivier Micheli, the market is moving towards data centres whose size is proportional to the economic activity nearby.

After a slow period between 2012 and 2015, the French data center market has caught up; France now ranks fourth in Europe, tied with Ireland. There are several reasons for this: the opportunity for international companies to reach 67 million people from locally hosted IT resources, the geostrategic importance of Marseille, and the government's efforts to create favourable conditions for datacenter development.
This finally puts France on a par with the United Kingdom, Germany and the Netherlands.

70% of these data centers' customers are public cloud players such as Amazon AWS, but also software publishers such as Salesforce. User companies expect a great deal of support.

The first challenge for datacenters, according to Olivier Micheli, is connectivity: companies now want an off-premises computer room from which data can be redistributed to users and Internet players.
The second challenge is intelligent buildings and reaching 100% renewable energy, for example through free cooling.

 Read the article

 

Source : LeMagIT

The largest datacenter in Lyon

on Thursday, 24 January 2019 Posted in Archives Rezopole, Archives GrenoblIX, Archives LyonIX


Inaugurated on January 15 by the national operator Jaguar Network, the latest datacenter in the Lyon metropolitan area was built within the Biopark of the 8th arrondissement. Known as "Rock", it is also the largest in Lyon, with more than 4,000 m² and 800 computer bays.
Supported by the Auvergne-Rhône-Alpes region and built in just twelve months, it already hosts IT projects in the health and Industry 4.0 sectors.

The building nevertheless has an unusual feature for a datacenter: "Its aesthetics were a constraint imposed by the Architectes des Bâtiments de France, the state heritage architects: Rock is located within the perimeter of the Édouard Herriot Hospital, built a century ago by the architect Tony Garnier and listed as a historic monument. To comply with the specifications imposed by the French administration, the building had to be fitted with windows, which were then walled up with breeze blocks since they are totally useless for computer rooms...!" notes Pierre Col in a ZDNet article.

Operation of the new datacenter's infrastructure has been optimized using, among other things, artificial intelligence. "An application developed by Jaguar Network's teams proactively manages the installations in order to perform preventive maintenance and maximize availability. Technologies based on Big Data and Machine Learning are thus built into the equipment, with the mission of detecting any incident predictively," says Jaguar Network.
A team is also permanently on site: "This guarantees customers local support for the simplified operation of their IT equipment. A technician can reach any server hosted in the heart of the building in less than 10 minutes," explains the operator and cloud host.

To demonstrate its commitment to digital transformation, the company has also deployed a network of more than 80 km of dark fibre, allowing any company in the Lyon metropolitan area to be connected directly by dedicated, secure cables. "A dedicated 100 Gbps network will be available from February 2019, offering the highest connection speeds available in France for businesses," says Jaguar Network.

 Read the article

 

Source : Lyon-entreprises.com

In Lyon, the battle for data centers has begun

on Monday, 07 January 2019 Posted in Archives Rezopole, Archives LyonIX


These highly secure infrastructures have multiplied over the years: ten years ago the Lyon metropolitan area had only one data centre, compared with 14 today. Behind this new market lies a fierce financial battle.

Proof of this: DCforData inaugurated its new DC just a few months ago in the 8th arrondissement of Lyon. With a surface area of 4,000 m² and two rooms already in use, "Rock" is one of the largest in the Auvergne-Rhône-Alpes region. Its customers are mainly local authorities, large companies and digital service companies from the region, but the objective is also to attract national and even global firms.
Indeed, according to Nicolas Pitance, president of DCforData, "Lyon sits on a huge telecom artery. Everyone knows the Fourvière tunnel or the Part-Dieu station; well, there is also a telecom artery running from northern to southern Europe towards Marseille." An opinion qualified by Samuel Triolet, director of Rezopole: "Lyon is not in an ideal geographical position [...] Paris is France, so we understand its appeal. From Marseille, many trans-Mediterranean and trans-oceanic fibres run out to Africa, the Middle East and Asia. But Lyon has no comparable pull."

However, the value of storing these servers locally is very real. First, from a practical point of view, a technical intervention is much easier and faster if the data center is located close to the company. Second, proximity significantly reduces latency and data transmission times. Finally, pooling infrastructure allows companies to make savings, in a world where data protection is becoming ever more important. "It is important for French companies to host their services on national territory because it is sovereign hosting: companies remain subject to French data-control legislation, which is stricter than that of the United States with its Patriot Act," confirms Cyrille Frantz Honegger, Director of Regional Relations at SFR. The telecom giant owns the "Net Center" data center in Vénissieux, with a surface area of 7,000 m².

However, there is another threat to companies: hacking and data theft.
"The data center protects against people entering the building. But it is the people who run the servers who manage computer security [...] Attempts at intrusion by hackers, even we are permanently blocked! " explains Hervé Gilquin, a researcher in applied mathematics and in charge of a DC at the Ecole Nationale Supérieure de Lyon. This school and other public institutions have chosen to store internally. Since 2017, the ENS has had its own data centre connected to the Renater network, a research telecommunications network, in order to keep the data produced as close as possible. "Researchers are developing, we have to do daily work on the site, and with all the calculations we do, our servers are in permanent operation, it would be too expensive for us to outsource," adds Hervé Gilquin.

Indeed, these DCs and the requirements that come with them are very expensive. Their owners rely on private and public investment, even though the latter is relatively recent. "The metropolis took time to understand the economic value of a data centre [...] For years, Lyon's economic players were convinced that it created no jobs. But without a local data center, companies will first offshore their IT management, then finance, then marketing and finally sales," explains Samuel Triolet. Nicolas Pitance, for his part, quips: "Let's say it's not as visible or as marketable as a football stadium."
A strategic change of direction, since a study by Cisco projects that data center storage capacity will double by 2021, which will only sharpen the ambitions of companies in this sector.

 Read the article

 

Source : Médiacités
