Feed aggregator

[$] PostgreSQL's fsync() surprise

LWN Headlines -

Developers of database management systems are, by necessity, concerned about getting data safely to persistent storage. So when the PostgreSQL community found out that the way the kernel handles I/O errors could result in data being lost without any errors being reported to user space, a fair amount of unhappiness resulted. The problem, which is exacerbated by the way PostgreSQL performs buffered I/O, turns out not to be unique to Linux, and will not be easy to solve even there.

Elon Musk's Alleged Email To Employees on Tesla's Big Picture

Slashdot -

An email allegedly sent by Elon Musk to Tesla staff has announced that the Model 3, which has faced a number of production issues, will go into "24/7" production by June, resulting in 6,000 Model 3 units made per week. But apart from this update, in the email, Elon Musk sheds light on how much he values precision in his cars. An excerpt: Most of the design tolerances of the Model 3 are already better than any other car in the world. Soon, they will all be better. This is not enough. We will keep going until the Model 3 build precision is a factor of ten better than any other car in the world. I am not kidding. Our car needs to be designed and built with such accuracy and precision that, if an owner measures dimensions, panel gaps and flushness, and their measurements don't match the Model 3 specs, it just means that their measuring tape is wrong. Some parts suppliers will be unwilling or unable to achieve this level of precision. I understand that this will be considered an unreasonable request by some. That's ok, there are lots of other car companies with much lower standards. They just can't work with Tesla.

Read more of this story at Slashdot.

Aten Design Group: Consolidating Websites Using Pantheon Upstreams

Drupal Planet -

Over the last couple of years, organizations have been coming to us with a new problem. Instead of needing a single website, they need dozens, if not hundreds. They might be large universities with many departments, an association with independent franchises nationwide, or a real estate developer with offerings all over the world. Often, these organizations already have several websites supported by different vendors and technologies. They’ve become frustrated with the overhead of maintaining Drupal sites built by one vendor and WordPress sites by another. Not to mention the cost of building new websites with a consistent look and feel.

While the details may vary, the broad ask is the same: how can we consolidate various websites onto a single platform that can be spun up quickly (preferably without developer involvement), and update and maintain them en masse, while maintaining enough structure for consistency and enough flexibility for customization? Essentially, they want to have their cake and would also like to eat it.

Over this series of posts, we’ll break down the various parts of this solution. We’ll first look at Pantheon’s hosting solution, and how its infrastructure is set up perfectly to give clients the autonomy they want. Then we’ll look at the command line tools that exist for developers to easily manage updates to dozens (if not hundreds) of websites. Lastly, we’ll look at the websites themselves and how Drupal 8 was leveraged to provide flexible website instances with structured limits.

Pantheon and Upstreams

Pantheon is a hosting solution designed specifically for Drupal and WordPress websites. For individual sites they offer a lot of features; however, the ones we’re most interested in are single-click installation of a new website and single-click updates to the codebase. Using a feature called Upstreams, users can create fresh installs of Drupal 7, Drupal 8, or WordPress that all reference a canonical codebase. When new code is pushed to any of those Upstreams, every site installed from it gets notified of the new code, which can be pulled into the instance with the click of a button.

Outside of the default options Pantheon maintains internally, developers can also build their own Custom Upstreams for website creation. Anyone with access to the Upstream can log into Pantheon and click a button to install a new website based on that codebase. In short, this codebase will handle installing all of the features every website should have, establish any default content necessary, and be used to roll out new features to the entire platform. This setup allows non-technical users to easily create new websites for their various properties, and then hand off specific websites to their appropriate property managers for editing. We’ll go over more specifics of this codebase in a later post.

Since a developer is no longer required for the creation of individual sites, this frees up a lot of time (and budget) for building new features or keeping on top of maintenance. The process for rolling out updates is simple: the developer writes code for a new feature and pushes it to the upstream repository. Once pushed, every site connected to this upstream will get an alert about new features and a shiny button that pulls them in with a single click.

Pantheon and Organizations

At this point it’s worth mentioning that Custom Upstreams are a feature of a special account type called an Organization. An Organization is used to group multiple websites, users, and Custom Upstreams under one umbrella. Organizations also come with additional features like free HTTPS and code monitoring services. It’s recommended that each client sign up for their own Organization account, rather than use one tied to their development partner. This gives them full control over who can create new websites using their Custom Upstream, who can manage all their websites, and who can only access specific websites.

Organization accounts and Custom Upstreams go a long way in helping organizations reduce the overhead they may have from managing several properties simultaneously. Having the option to create an infinite number of websites in-house helps reduce the cost of growth. Having every website using the same codebase means new features can easily be rolled out to the entire platform and security vulnerabilities can be handled quickly.

The only downside to this approach is that updates are generally applied one site at a time. The developer can push the code to the Custom Upstream, but someone still has to log into every website and click the button to update that site. For a handful of sites, this might be manageable. For dozens to hundreds, it becomes tedious. In the next post we’ll look at some of the scripted solutions Pantheon offers for applying updates to, and managing, an ever-growing number of websites at once.

Security advisories: Drupal core - Moderately critical - Cross Site Scripting - SA-CORE-2018-003

Drupal Planet -

Project: Drupal core
Date: 2018-April-18
Security risk: Moderately critical 12∕25 AC:Complex/A:User/CI:Some/II:Some/E:Theoretical/TD:Default
Vulnerability: Cross Site Scripting
Description:

CKEditor, a third-party JavaScript library included in Drupal core, has fixed a cross-site scripting (XSS) vulnerability. The vulnerability stemmed from the fact that it was possible to execute XSS inside CKEditor when using the image2 plugin (which Drupal 8 core also uses).

We would like to thank the CKEditor team for patching the vulnerability and coordinating the fix and release process, and matching the Drupal core security window.

Solution: 
  • If you are using Drupal 8, update to Drupal 8.5.2 or Drupal 8.4.7.
  • The Drupal 7.x CKEditor contributed module is not affected if you are running CKEditor module 7.x-1.18 and using CKEditor from the CDN, since it currently uses a version of the CKEditor library that is not vulnerable.
  • If you installed CKEditor in Drupal 7 using another method (for example with the WYSIWYG module or the CKEditor module with CKEditor locally) and you’re using a version of CKEditor from 4.5.11 up to 4.9.1, update the third-party JavaScript library by downloading CKEditor 4.9.2 from CKEditor's site.

Security updates for Wednesday

LWN Headlines -

Security updates have been issued by Debian (freeplane and jruby), Fedora (kernel and python-bleach), Gentoo (evince, gdk-pixbuf, and ncurses), openSUSE (kernel), Oracle (gcc, glibc, kernel, krb5, ntp, openssh, openssl, policycoreutils, qemu-kvm, and xdg-user-dirs), Red Hat (corosync, glusterfs, kernel, and kernel-rt), SUSE (openssl), and Ubuntu (openssl and perl).

Russia Admits To Blocking Millions of IP Addresses

Slashdot -

It turns out, the Russian government, in its quest to block Telegram, accidentally shut down several other services as well. From a report: The chief of the Russian communications watchdog acknowledged Wednesday that millions of unrelated IP addresses have been frozen in a so-far futile attempt to block a popular messaging app. Telegram, the messaging app that was ordered to be blocked last week, was still available to users in Russia despite authorities' frantic attempts to hit it by blocking other services. The row erupted after Telegram, which was developed by Russian entrepreneur Pavel Durov, refused to hand its encryption keys to the intelligence agencies. The Russian government insists it needs them to pre-empt extremist attacks but Telegram dismissed the request as a breach of privacy. Alexander Zharov, chief of the Federal Communications Agency, said in an interview with the Izvestia daily published Wednesday that Russia is blocking 18 networks that are used by Amazon and Google and which host sites that they believe Telegram is using to circumvent the ban.


Iran Bans State Bodies From Using Telegram App, Khamenei Shuts Account

Slashdot -

Iran banned government bodies on Wednesday from using the popular Telegram instant messaging app as Supreme Leader Ayatollah Ali Khamenei's office said his account would shut down to protect national security, Iranian media reported. From a report: ISNA news agency did not give a reason for the government ban on the service which lets people send encrypted messages and has an estimated 40 million users in the Islamic Republic. The order came days after Russia -- Iran's ally in the Syrian war -- started blocking the app in its territory following the company's repeated refusal to give Russian state security services access to users' secret messages. Iran's government banned "all state bodies from using the foreign messaging app," according to ISNA.


Amazon and Best Buy Team Up To Sell Smart TVs

Slashdot -

Amazon and Best Buy want to sell you your next smart TV. From a report: The companies, which are two of the biggest electronics retailers in the US, on Wednesday revealed a new multiyear partnership to sell the next generation of TVs running Amazon's Fire TV operating system to customers in the US and Canada. Best Buy will be the exclusive seller for more than 10 4K and HD Fire TV Edition models made by Toshiba and Best Buy's Insignia brand starting this summer. Pricing on the sets has not yet been announced. These smart TVs will be available only in Best Buy stores, on BestBuy.com and, for the first time, from Best Buy as a seller on Amazon.com.


InternetDevels: The Masquerade module: see your Drupal site through each user’s eyes!

Drupal Planet -

Let us invite you to an exciting masquerade! Its mission is to check what each user can see or do on your website. Drupal has an awesomely flexible system of user roles and permissions, as well as opportunities for fine-grained user access. These are the keystones of Drupal security, smooth user experiences, and cool features. You can make the most out of them, and then test the result for different users with the help of the Masquerade module.

Read more

Chrome 66 Arrives With Autoplaying Content Blocked By Default

Slashdot -

An anonymous reader quotes a report from VentureBeat: Google today launched Chrome 66 for Windows, Mac, Linux, and Android. The desktop release includes autoplaying content muted by default, security improvements, and new developer features. You can update to the latest version now using the browser's built-in silent updater or download it directly from google.com/chrome. In our tests, autoplaying content that is muted still plays automatically. Autoplaying content with sound, whether it has visible controls or not, and whether it is set to play on loop or not, simply does not start playing. Note that this is all encompassing -- even autoplaying content you are expecting or is the main focus of the page does not play. YouTube videos, for example, no longer start playing automatically. And in case that's not enough, or if a page somehow circumvents the autoplaying block, you can still mute whole websites.


Lullabot: Decoupled Drupal Summit at DrupalCon Nashville

Drupal Planet -

This first-ever Decoupled Summit at DrupalCon Nashville was a huge hit. Not only did it sell out but the room was packed to the gills, literally standing room only. Decoupled Drupal is a hot topic these days. The decoupled summit was an opportunity to look at the state of decoupled Drupal, analyze pros and cons of decoupling, and look at decoupling strategies and examples. There is lots of interest in decoupling, but there are still many hard problems to solve, and it isn’t the right solution for every situation. This summit was an opportunity to assess the state of best practices.

The summit was organized by Lullabot's Sally Young and Mediacurrent's Matt Davis, two of the innovators in this space.

What is “decoupled Drupal”? 

First, a quick explanation of what “decoupled Drupal” means, in case you haven’t caught the fever yet. Historically, Drupal is used to deliver all the components of a website, an approach that can be called “traditional,” “monolithic,” or “full stack” Drupal. In this scenario, Drupal provides the mechanism to create and store structured data, includes an editorial interface that allows editors to add and edit content and set configuration, and takes responsibility for creating the front-end markup that users see in their browsers. Drupal does it all.

“Decoupled”, or “headless” Drupal is where a site separates these website functions across multiple web frameworks and environments. That could mean managing data creation and storage in a traditional Drupal installation, but using React and Node.js to create the page markup. It could also mean using a React app as an editorial interface to a traditional Drupal site. 

Drupal tools and activity

Drupal core is enabling this activity through its API-first initiative, and Drupal and the Drupal community have numerous tools available to assist in creating a decoupled site:

  • Contenta, a pre-configured decoupled Drupal distribution.
  • Waterwheel, an emerging ecosystem of software development kits (SDKs) built by the Drupal community.
  • JSON API, an API that allows consumers to request exactly the data they need, rather than being limited to pre-configured REST endpoints.
  • GraphQL, another API that allows consumers to request only the data they want while combining multiple round-trip requests into one.

There’s lots of activity in headless CMSes. But the competitors are proprietary. Drupal and WordPress are the only end-to-end open source contenders. The others only open source the SDKs.

Highlights of the summit

The summit included several speakers, a business panel, and some demonstrations of decoupled applications. Participants brought up lots of interesting questions and observations. I jotted down several quotes, but it wasn't always possible to give attribution with such an open discussion, so my apologies in advance. Some general reflections from my notes:

Why decouple?
  • More and more sites are delivering content to multiple consumers, mobile apps, TV, etc. In this situation, the website can become just another consumer of the data.
  • It’s easier to find generalist JavaScript developers than expert Drupal developers. Decoupling is one way to ensure the front-end team doesn't have to know anything about Drupal.
  • If you have large teams, a decoupled site allows you to have a clean separation of duties, so the front and back end can work rapidly in parallel to build the site.
  • A modern JavaScript front-end can be fast—although several participants pointed out that a decoupled site is not automatically faster. You still need to pay attention to performance issues.
  • Content is expensive to create; decoupling is a way to re-use it, not just across platforms, but also from redesign to redesign.
  • You could launch a brand new design without making any changes to the back end, assuming you have a well-designed API (meaning an API that doesn't include any assumptions about what the front end looks like). As one participant said, “One day, React won't be cool anymore, we'll need to be ready for the next big thing.”
What are some of the complications?
  • It often or always costs more to decouple than to build a traditional site. There’s additional infrastructure, the need to create new solutions for things that traditional Drupal already does, and the fact that we’re still as a community figuring out the best practices.
  • If you only need a website, decoupling is a convoluted way to accomplish it. Decoupling makes sense when you are building an API to serve multiple consumers.
  • You don’t have to decouple to support other applications. Drupal can be a full-featured website, and also the source of APIs.
  • Some tasks are particularly tricky in a decoupled environment, like previewing content before publishing it. Some participants pointed out, though, that in a truly decoupled environment preview makes no sense anyway. “We have a bias that a node is a page, but that’s not true in a decoupled context. There is no concept of a page on a smartphone. Preview is complicated because of that.”
  • Many businesses have page-centric assumptions embedded deep into their content and processes. It might be difficult to shift to a model where editors create content that might be deployed in many different combinations and environments. One participant discussed a client that "used all the decoupled technology at their disposal to build a highly coupled CMS." On the other hand, some clients are pure Drupal top to bottom, but they have a good content model and are effectively already "decoupling" their content from its eventual display.
  • Another quote, “Clients trying to unify multiple properties have a special problem; they have to swallow that there will have to be a unified content model in order to decouple. Otherwise, you're building numerous decoupled systems.”
  • Once you are decoupled, you may not even know who is consuming the APIs or how they're being used. If you make changes, you may break things outside of your website. You need to be aware of the dependency you created by serving an API.
Speakers and Panelists

The following is a list of speakers and panelists. These are people and companies you could talk to if you have more questions about decoupling:

  • Sally Young (Lullabot)
  • Matt Davis (Mediacurrent)
  • Jeff Eaton (Lullabot)
  • Preston So (Acquia)
  • Matt Grill (Acquia)
  • Daniel Wehner (TES)
  • Wes Ruvalcaba (Lullabot)
  • Mateu Aguiló Bosch (Lullabot)
  • Suzi Arnold (Comcast)
  • Jason Oscar (Comcast)
  • Jeremy Dickens (Weather.com)
  • Nichole Davison (Edutopia)
  • Baddy Breidert (1xinternet)
  • Christoph Breidert (1xinternet)
  • Patrick Coffey (Four Kitchens)
  • Greg Amaroso (Softvision)
  • Eric Hestenes (Edutopia)
  • David Hwang (DocuSign)
  • Shellie Hutchens (Mediacurrent)
  • Karen Stevenson (Lullabot)
Summary

It was a worthwhile summit, I learned a lot, and I imagine others did as well. Several people mentioned that Decoupled Drupal Days will be taking place August 17-19, 2018 in New York City (there is a link to last year's event). The organizers say it will be “brutally honest, not a cheerleading session.” And they’re also looking for sponsors. I’d highly recommend marking those days on your calendar if you’re interested in this topic!

The Proprietary Format

The Daily WTF -

Have you ever secured something with a lock? The intent is that at some point in the future, you'll use the requisite key to regain access to it. Of course, the underlying assumption is that you actually have the key. How do you open a lock once you've lost the key? That's when you need to get creative. Lock picks. Bolt cutters. Blow torch. GAU-8...

In 2004, Ben S. went on a solo bicycle tour, and for reasons of weight, his only computer was a Handspring Visor Deluxe PDA running Palm OS. He had an external, folding keyboard that he would use to type his notes from each day of the trip. To keep these notes organized by day, he stored them in the Datebook (calendar) app as all-day events. The PDA would sync with a desktop computer using a Handspring-branded fork of the Palm Desktop software. The whole Datebook could then be exported as a text file from there. As such, Ben figured his notes were safe. After the trip ended, he bought a Windows PC that he had until 2010, but he never quite got around to exporting the text file. After he switched to using a Mac, he copied the files to the Mac and gave away the PC.

Ten years later, he decided to go through all of the old notes, but he couldn't open the files!

Uh oh.

The Handspring company had gone out of business, and the software wouldn't run on the Mac. His parents had the Palm-branded version of the software on one of their older Macs, but Handspring used a different data file format that the Palm software couldn't open. His in-laws had an old Windows PC, and he was able to install the Handspring software, but it wouldn't even open without a physical device to sync with, so the file just couldn't be opened. Ben reluctantly gave up on ever accessing the notes again.

Have you ever looked at something and then turned your head sideways, only to see it in a whole new light?

One day, Ben was going through some old clutter and found a backup DVD-R he had made of the Windows PC before he had wiped its hard disk. He found the datebook.dat file and opened it in Sublime Text. There he saw rows and rows of hexadecimal code arranged into tidy columns. In this case, however, the columns between the codes were not just on-screen formatting for readability; they were actual space characters. It was not a binary data file after all; it was a text file.

The Handspring data file format was a text file containing hexadecimal code with spaces in it! He copied and pasted the entire file into an online hex-to-text converter (which ignored the spaces and line breaks), and voilà, Ben had his notes back!
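The same recovery can be done offline in a few lines of any scripting language. A minimal Python sketch (the sample bytes here are invented for illustration):

```python
# Decode a "data" file that is really space-separated hex byte values.
def hex_dump_to_text(dump: str) -> str:
    # Drop all whitespace (column spaces and line breaks),
    # then decode the remaining hex digits back into bytes.
    hex_digits = "".join(dump.split())
    return bytes.fromhex(hex_digits).decode("latin-1")

sample = "48 65 6c 6c 6f 20 77\n6f 72 6c 64"
print(hex_dump_to_text(sample))  # Hello world
```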


Dropsolid: Varnish for Drupal 8

Drupal Planet -

Varnish for Drupal 8: the need for speed (18 Apr, Niels A)

Our team had been using Varnish for a long time on our Dropsolid Drupal 7 project, and we thought the time had come to get it working for Drupal 8 as well. That is why our CTO, Nick Veenhof, organized a meetup about Caching & Purging in Drupal 8. Niels van Mourik gave an elaborate presentation about the Purge module and how it works.
I definitely recommend watching the video and the slides on his blog. In this blog post, we’ll elaborate and build on what Niels explained to us that day.
First, let’s start off with a quick crash course on what Varnish actually is and how it can benefit your website.

 

Varnish 101

“Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.” (Source: Varnish-cache.org)

In layman’s terms, Varnish will serve a webpage from its own internal cache if it has it available. This drastically reduces the number of requests to the webserver where your application is hosted. This will, in turn, free up resources on your webserver, so your web application can handle more complicated tasks and more users.

In short, Varnish will make your web application faster and will allow you to scale it more efficiently.
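That hit/miss behavior can be modeled in a few lines. This toy Python cache (all names invented; not Varnish itself) shows why a cache hit never touches the backend:

```python
# Toy model of a caching reverse proxy: serve from cache on a hit,
# fall through to the (slow) backend only on a miss.
class TinyCache:
    def __init__(self, backend):
        self.backend = backend      # function: url -> response body
        self.store = {}             # url -> cached body
        self.backend_hits = 0       # how often the backend was asked

    def get(self, url):
        if url not in self.store:   # cache miss: ask the backend once
            self.store[url] = self.backend(url)
            self.backend_hits += 1
        return self.store[url]      # cache hit: no backend work at all

def backend(url):
    return "<html>page for " + url + "</html>"

proxy = TinyCache(backend)
proxy.get("/news")
proxy.get("/news")
proxy.get("/news")
print(proxy.backend_hits)  # 1: only the first request reached the backend
```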


How we use Varnish and Drupal 7

How did things typically work in D7? Well, you’d put a Varnish server with a Drupal-compatible Varnish configuration file (vcl) in front of your Drupal 7 site and it would start caching right away, depending, of course, on what is in the vcl and the headers your Drupal site sent.
The next step would be to install the Varnish module from Drupal.org. This module’s sole purpose is to invalidate your Varnish cache, and it does so using telnet. This also requires the Varnish server to be accessible from the Drupal backend, which isn’t always an ideal scenario, certainly not when multiple sites are being served from the same Varnish.

The biggest issue when using Drupal 7 with the Varnish module is that invalidation of content just isn’t smart enough. For instance, if you updated one news item, you’d want only that page and the pages where the news item is visible to be removed from Varnish’s cache. But that isn’t possible. This isn’t the module’s fault at all; it’s simply the way Drupal 7 was built. There are a few alternatives that do make it a little smarter, but these solutions aren’t foolproof either.

Luckily, Drupal 8 is a whole new ballgame!


How we use Varnish with Drupal 8

Drupal 8 in itself is very smart at caching, and it revolves around the following three main pillars (explained here from a page-cache perspective; Drupal will also cache parts of pages):

  • Cache tags: A page gets a list of tags based on the content that is on it. For instance, if you have a news overview, all rendered news items will be added as tags to that page, allowing you to invalidate that cache entry when one of those news items changes.
  • Cache contexts: A page can vary based on variables from the current request. For instance, you might have a news overview that filters the news items based on a query parameter.
  • Cache max-age: A page can be served from cache for X amount of time. After that time has passed, it needs to be rebuilt.
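To make the cache-tag idea concrete, here is a toy Python model of tag-based invalidation (invented names; not Drupal’s actual implementation): each cached page records the tags of the content rendered on it, and changing one item drops every page that carried its tag.

```python
# url -> (cached body, set of cache tags on that page)
cache = {}

def cache_page(url, body, tags):
    cache[url] = (body, set(tags))

def invalidate_tag(tag):
    # Drop every cached page that rendered content carrying this tag.
    for url in [u for u, (_, tags) in cache.items() if tag in tags]:
        del cache[url]

cache_page("/news", "<overview>", {"node:1", "node:2", "node_list"})
cache_page("/news/1", "<item 1>", {"node:1"})
cache_page("/about", "<about>", {"node:9"})

invalidate_tag("node:1")  # news item 1 was edited
print(sorted(cache))      # ['/about']: both pages showing node 1 are gone
```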

You can read more about Drupal 8’s new caching system here.


All about invalidation

Niels van Mourik created a module called Purge. This is a modular external cache invalidation framework. It leverages Drupal’s cache system to provide easy access to the cache data so you only need to focus on the communication with an external service. It already has a lot of third-party integrations available like Acquia Purge, Akamai and Varnish Purge.

We are now adding another one to the list: the Dropsolid Purge module.


What does Dropsolid Purge do and why do I need it?

The Dropsolid Purge module enables you to invalidate caches in multiple Varnish load balancers. It also lets you cache multiple web applications behind the same Varnish server. The module was heavily inspired by the Acquia Purge module, and we reused a lot of the initial code because it has a smart way of handling invalidation through tags, but we’ll get into that a little later. The problem with the Acquia Purge module is that it is designed to work on Acquia Cloud: it depends on certain environment variables, and the Varnish configuration is proprietary knowledge of Acquia. This means it isn’t usable in other environments.

We also experimented with the Varnish Purge module, but it lacked support for cache invalidation in case you have multiple sites/multisites cached by a single Varnish server. This is because the module doesn’t actually tell Varnish which site it should invalidate pages for, so it invalidates pages for all the sites. It also doesn’t have the most efficient way of passing along invalidation requests. It offers two ways of sending them to Varnish: one by one or bundled together. The one-by-one option results in a lot of requests, given that updating a single node can easily invalidate 30 tags. Using the bundle purger could make you hit the maximum size of your headers, but more on that later.


What's in the bag?

Currently we provide the following features:

  • Support for tag invalidation and everything invalidation.
  • The module will only purge tags for the current site, by using the X-Dropsolid-Site header.
  • The current site is defined by the name you set in config and the subsite directory.
  • Support for multiple load balancers.
  • There is also a default vcl in the examples folder that contains the logic for the bans.

It can be used for any environment if you just follow the installation instructions in the readme.


Under the hood

Preparing and handling the responses for/by Varnish

By default, the module will add two headers to every response it gives:

  • X-Dropsolid-Site: A unique site identifier as a hash based on config (you provide through settings.php) and site parameters:
    • A site name
    • A site environment
    • A site group
    • The path of your site (e.g. sites/default or sites/somesubsite)
  • X-Dropsolid-Purge-Tags: A hashed version of each cache tag on the current page. (hashed to keep the length low and avoid hitting the maximum size of the header)
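For illustration, a site identifier along these lines could be derived as follows. This Python sketch is an assumption: the post doesn’t show the module’s actual hashing code, only that the header is a hash of the config and site parameters.

```python
import hashlib

# Illustrative sketch (not the module's actual code): combine the site
# parameters into one string and hash it into a compact identifier.
def site_identifier(name, environment, group, path):
    raw = "|".join([name, environment, group, path])
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

ident = site_identifier("mysite", "production", "mygroup", "sites/default")
print(len(ident))  # 32: a fixed-length hex string, regardless of input size
```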

When the response reaches Varnish, it will save those headers along with the cache object. This will allow us to target these specific cache objects for invalidation.

In our vcl file we also strip those headers, so they aren’t visible to the end user:

sub vcl_deliver {
    unset resp.http.X-Dropsolid-Purge-Tags;
    unset resp.http.X-Dropsolid-Site;
}

Invalidation of pages

For Drupal 8 we no longer use telnet to communicate with Varnish; we use a BAN request instead. This request gets sent from our site, and it will only be accepted when it comes from our site. We currently enforce this by validating the IP of the request against a list of IPs that are allowed to make BAN requests.

As we mentioned earlier, we provide two ways of invalidating cached pages in Varnish:

  • Tag invalidation: We invalidate pages which have the same cache tags as we send in our BAN request to Varnish.
  • Everything invalidation: We invalidate all pages which are from a certain site.

Tag invalidation

Just like the Acquia Purge module, we send a BAN request which contains a group of 12 hashed cache tags, which will then be compared to what Varnish has saved. We also pass along the unique site identifier to indicate that we only want to invalidate pages for a specific site.

Our BAN request has the following headers:

  • X-Dropsolid-Purge: Unique site identifier
  • X-Dropsolid-Purge-Tags: 12 hashed tags
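The batching itself is simple to sketch. In this hypothetical Python helper, the header names come from the post but the hashing scheme and batch layout are illustrative assumptions: tags are hashed and grouped into BAN requests of at most 12 each.

```python
import hashlib

# Sketch: hash each invalidated cache tag (to keep headers short) and
# group the hashes into BAN requests of at most `batch_size` tags.
def build_ban_requests(site_id, tags, batch_size=12):
    hashed = [hashlib.md5(t.encode("utf-8")).hexdigest()[:8] for t in tags]
    requests = []
    for i in range(0, len(hashed), batch_size):
        requests.append({
            "X-Dropsolid-Purge": site_id,                            # site identifier
            "X-Dropsolid-Purge-Tags": "|".join(hashed[i:i + batch_size]),
        })
    return requests

reqs = build_ban_requests("site-abc123", ["node:%d" % n for n in range(30)])
print(len(reqs))  # 3: 30 tags split into batches of 12, 12 and 6
```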

When Varnish picks up this request, it will go through the following logic:

sub vcl_recv {
    # Only allow BAN requests from IP addresses in the 'purge' ACL.
    if (req.method == "BAN") {
        # Same ACL check as above:
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }
        # Logic for banning based on tags
        # https://Varnish-cache.org/docs/trunk/reference/vcl.html#vcl-7-ban
        if (req.http.X-Dropsolid-Purge-Tags) {
            # Add bans for tags, but only for the current site requesting the ban
            ban("obj.http.X-Dropsolid-Purge-Tags ~ " + req.http.X-Dropsolid-Purge-Tags
                + " && obj.http.X-Dropsolid-Site == " + req.http.X-Dropsolid-Purge);
            return (synth(200, "Ban added."));
        }
    }
}

We check if the request comes from an IP that is whitelisted. We then add bans for every cache object that matches our unique site identifier and matches at least one of the cache tags we sent along. 

You can easily test this by updating a node and verifying that Varnish serves you a new version of that node's page.
 

Everything invalidation

When the everything invalidation is triggered, a BAN request is sent with the following headers:

  • X-Dropsolid-Purge-All: True
  • X-Dropsolid-Purge: Unique site identifier
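The client side mirrors the tag-invalidation case; only the headers differ. A minimal Python sketch (host and site identifier are placeholders, and the request must come from a whitelisted IP):

```python
import http.client

def send_site_ban(varnish_host, site_id, port=80):
    """Ask Varnish to drop every cached page belonging to one site."""
    conn = http.client.HTTPConnection(varnish_host, port, timeout=5)
    conn.request("BAN", "/", headers={
        "X-Dropsolid-Purge-All": "True",  # flag: ban everything ...
        "X-Dropsolid-Purge": site_id,     # ... but only for this site
    })
    status = conn.getresponse().status
    conn.close()
    return status
```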

And we execute the following logic on Varnish’s side:
 

sub vcl_recv {
    # Only allow BAN requests from IP addresses in the 'purge' ACL.
    if (req.method == "BAN") {
        # Same ACL check as above:
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }
        # Logic for banning everything
        if (req.http.X-Dropsolid-Purge-All) {
            # Add bans for the whole site
            ban("obj.http.X-Dropsolid-Site == " + req.http.X-Dropsolid-Purge);
            return (synth(200, "Ban added."));
        }
    }
}

When Varnish receives a BAN request with the X-Dropsolid-Purge-All header, it bans all cached objects that have the same unique site identifier. You can easily test this by executing the following command: drush cache-rebuild-external.

Beware: a normal drush cache-rebuild will not invalidate an external cache like Varnish.


Why this matters

To us, this is yet another step in making our cache smarter, our web applications faster and our servers leaner. If you have any questions about this post, you can always leave a comment below or open an issue on drupal.org.

Are you looking for a partner that will help you to speed up your site, without having to switch hosting? The Dropsolid Platform helps you to adjust and streamline your development processes, without the typical vendor lock-in of traditional hosting solutions. At Dropsolid, we also offer dedicated hosting, but we never enforce our own platform. Dropsolid helps you to grow your digital business - from every possible angle!

 


Huawei To Back Off US Market Amid Rising Tensions

Slashdot -

Huawei is reportedly going to give up on selling its products and services in the United States (Warning: source may be paywalled; alternative source) due to Washington's accusations that the company has ties to the Chinese government. The change in tactics comes a week after the company laid off five American employees, including its biggest American lobbyist. The New York Times reports: Huawei's tactics are changing as its business prospects in the United States have darkened considerably. On Tuesday, the Federal Communications Commission voted to proceed with a new rule that could effectively kill off what little business the company has in the United States. Although the proposed rule does not mention Huawei by name, it would block federally subsidized telecommunications carriers from using suppliers deemed to pose a risk to American national security. Huawei's latest moves suggest that it has accepted that its political battles in the United States are not ones it is likely to win. "Some things cannot change their course according to our wishes," Eric Xu, Huawei's deputy chairman, said at the company's annual meeting with analysts on Tuesday. "With some things, when you let them go, you actually feel more at ease."

Read more of this story at Slashdot.

Southwest Airlines Engine Failure Results In First Fatality On US Airline In 9 Years

Slashdot -

schwit1 shares a report from Heavy: Tammie Jo Shults is the pilot who bravely flew Southwest Flight 1380 to safety after part of its left engine ripped off, damaging a window and nearly sucking a woman out of the plane. The flight was en route to Dallas Love airport from New York City, and had to make an emergency landing in Philadelphia. Shults, 56, kept her cool during an incredibly intense situation, audio from her conversation with air traffic controllers reveals, while many passengers posted on social media that they were scared these were their last moments. She, with the help of the co-pilot and the rest of the crew, landed the plane safely. The NTSB reported that there was one fatality out of 143 passengers on board. Some passengers said that someone had a heart attack during the flight, but it's not yet known if this was the fatality reported by the NTSB. The woman who died has been identified by KOAT-TV as Jennifer Riordan, 43, of Albuquerque, New Mexico.

Read more of this story at Slashdot.

Cambridge Analytica Planned To Launch Its Own Cryptocurrency

Slashdot -

Cambridge Analytica, the data analytics firm that harvested millions of Facebook profiles of U.S. voters, attempted to develop its own cryptocurrency this past year and intended to raise funds through an initial coin offering. The digital coin would have helped people store online personal data and even sell it, former Cambridge Analytica employee Brittany Kaiser told The New York Times. The Verge reports: Cambridge Analytica, which obtained the data of 87 million Facebook users, was hoping to raise as much as $30 million through the venture, anonymous sources told Reuters. Cambridge Analytica confirmed to Reuters that it had previously explored blockchain technology, but did not confirm the coin offering and didn't say whether efforts are still underway. The company also reportedly attempted to promote another digital currency behind the scenes. It arranged for potential investors to take a vacation trip to Macau in support of Dragon Coin, a cryptocurrency aimed at casino players. Dragon Coin has been supported by Macau gangster Wan Kuok-koi, nicknamed Broken Tooth, according to documents obtained by the Times. Cambridge Analytica started working on its own initial coin offering mid-2017 and the initiative was overseen in part by CEO Alexander Nix and former employee Brittany Kaiser. The company's plans to launch an ICO were still in the early stages when Nix was suspended last month and the Facebook data leak started to gain public attention.

Read more of this story at Slashdot.

FDA Approves First Contact Lenses That Turn Dark In Bright Sunlight

Slashdot -

The first photochromic contact lenses have been approved by the FDA. "A unique additive will automatically darken the lenses when they're exposed to bright light," reports Interesting Engineering, citing an FDA statement. "The lenses will clear up whenever they're back in normal or darker lighting conditions." From the report: "This contact lens is the first of its kind to incorporate the same technology that is used in eyeglasses that automatically darken in the sun," said Malvina Eydelman. Eydelman serves as director of the division of ophthalmic, and ear, nose and throat devices at the FDA's Center for Devices and Radiological Health. The FDA approved the technology after extensive trials and clinical studies. One study had 24 wearers use the contacts while driving in both daytime and nighttime settings. The FDA found that there were no problems with driving performance or issues with vision while wearing those contact lenses. In total, over 1,000 patients were involved in the various studies conducted by the FDA. According to current plans, these photochromic lenses should be available for those needing them by the first half of 2019.

Read more of this story at Slashdot.
