From patchwork Thu Sep 17 23:06:56 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Bottomley
X-Patchwork-Id: 48417
X-Patchwork-Delegate: kyle@mcmartin.ca
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n8HN7MGK002966
	for ; Thu, 17 Sep 2009 23:07:24 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753569AbZIQXHU (ORCPT );
	Thu, 17 Sep 2009 19:07:20 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1753203AbZIQXHT (ORCPT );
	Thu, 17 Sep 2009 19:07:19 -0400
Received: from cantor2.suse.de ([195.135.220.15]:33991 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751038AbZIQXHR (ORCPT );
	Thu, 17 Sep 2009 19:07:17 -0400
Received: from relay2.suse.de (mail2.suse.de [195.135.221.8])
	by mx2.suse.de (Postfix) with ESMTP id 8CC858655F;
	Fri, 18 Sep 2009 01:07:20 +0200 (CEST)
From: James Bottomley
To: linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-parisc@vger.kernel.org
Cc: Russell King , Christoph Hellwig , Paul Mundt ,
	James Bottomley , James Bottomley
Subject: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
Date: Thu, 17 Sep 2009 18:06:56 -0500
Message-Id: <1253228821-4700-2-git-send-email-James.Bottomley@suse.de>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1253228821-4700-1-git-send-email-James.Bottomley@suse.de>
References: <1253228821-4700-1-git-send-email-James.Bottomley@suse.de>
Sender: linux-parisc-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-parisc@vger.kernel.org

From: James Bottomley

On virtually indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct virtual
address to prepare pages for DMA.
On some architectures (like arm), we cannot prevent the CPU from doing
data move-in along the alias (and thus reading stale data), so we have
to introduce not only a flush API to push dirty cache lines out, but
also an invalidate API to kill inconsistent cache lines that may have
moved in before DMA changed the data.

Signed-off-by: James Bottomley
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include